edit configMap powerflex-urls
+```
+
+In the `data` field, navigate to the bottom, where you see `default allow = false`, as shown in the example below. Replace `false` with `true` and save the edit.
+
+
+data:
+ url.rego: "# Copyright © 2022 Dell Inc., or its subsidiaries. All Rights Reserved.\n#\n#
+ Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not
+ use this file except in compliance with the License.\n# You may obtain a copy
+ of the License at\n#\n# http:#www.apache.org/licenses/LICENSE-2.0\n#\n# Unless
+ required by applicable law or agreed to in writing, software\n# distributed under
+ the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS
+ OF ANY KIND, either express or implied.\n# See the License for the specific language
+ governing permissions and\n# limitations under the License.\n\npackage karavi.authz.url\n\nallowlist
+ = [\n \"GET /api/login/\",\n\t\t\"POST /proxy/refresh-token/\",\n\t\t\"GET
+ /api/version/\",\n\t\t\"GET /api/types/System/instances/\",\n\t\t\"GET /api/types/StoragePool/instances/\",\n\t\t\"POST
+ /api/types/Volume/instances/\",\n\t\t\"GET /api/instances/Volume::[a-f0-9]+/$\",\n\t\t\"POST
+ /api/types/Volume/instances/action/queryIdByKey/\",\n\t\t\"GET /api/instances/System::[a-f0-9]+/relationships/Sdc/\",\n\t\t\"GET
+ /api/instances/Sdc::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/Sdc::[a-f0-9]+/relationships/Volume/\",\n\t\t\"GET
+ /api/instances/Volume::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/StoragePool::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"POST
+ /api/instances/Volume::[a-f0-9]+/action/addMappedSdc/\",\n\t\t\"POST /api/instances/Volume::[a-f0-9]+/action/removeMappedSdc/\",\n\t\t\"POST
+ /api/instances/Volume::[a-f0-9]+/action/removeVolume/\"\n]\n\ndefault allow =
+ false\nallow {\n\tregex.match(allowlist[_], sprintf(\"%s %s\", [input.method,
+ input.url]))\n}\n"
+
+
+Edited data:
+
+
+data:
+ url.rego: "# Copyright © 2022 Dell Inc., or its subsidiaries. All Rights Reserved.\n#\n#
+ Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not
+ use this file except in compliance with the License.\n# You may obtain a copy
+ of the License at\n#\n# http:#www.apache.org/licenses/LICENSE-2.0\n#\n# Unless
+ required by applicable law or agreed to in writing, software\n# distributed under
+ the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS
+ OF ANY KIND, either express or implied.\n# See the License for the specific language
+ governing permissions and\n# limitations under the License.\n\npackage karavi.authz.url\n\nallowlist
+ = [\n \"GET /api/login/\",\n\t\t\"POST /proxy/refresh-token/\",\n\t\t\"GET
+ /api/version/\",\n\t\t\"GET /api/types/System/instances/\",\n\t\t\"GET /api/types/StoragePool/instances/\",\n\t\t\"POST
+ /api/types/Volume/instances/\",\n\t\t\"GET /api/instances/Volume::[a-f0-9]+/$\",\n\t\t\"POST
+ /api/types/Volume/instances/action/queryIdByKey/\",\n\t\t\"GET /api/instances/System::[a-f0-9]+/relationships/Sdc/\",\n\t\t\"GET
+ /api/instances/Sdc::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/Sdc::[a-f0-9]+/relationships/Volume/\",\n\t\t\"GET
+ /api/instances/Volume::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/StoragePool::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"POST
+ /api/instances/Volume::[a-f0-9]+/action/addMappedSdc/\",\n\t\t\"POST /api/instances/Volume::[a-f0-9]+/action/removeMappedSdc/\",\n\t\t\"POST
+ /api/instances/Volume::[a-f0-9]+/action/removeVolume/\"\n]\n\ndefault allow =
+ true\nallow {\n\tregex.match(allowlist[_], sprintf(\"%s %s\", [input.method,
+ input.url]))\n}\n"
+
+
+2. Perform a rollout restart of the CSM Authorization proxy-server so that the policy change is applied.
+
+```
kubectl -n <namespace> rollout restart deploy/proxy-server
+```
+
+3. Optionally, perform a rollout restart of the CSI Driver for Dell PowerFlex to restart the driver pods, or wait for the Kubernetes CrashLoopBackOff behavior to restart the driver.
+
+```
kubectl -n <namespace> rollout restart deploy/vxflexos-controller
kubectl -n <namespace> rollout restart daemonset/vxflexos-node
+```
diff --git a/content/v1/csidriver/_index.md b/content/v1/csidriver/_index.md
index 495c29b500..732f364787 100644
--- a/content/v1/csidriver/_index.md
+++ b/content/v1/csidriver/_index.md
@@ -14,16 +14,16 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
### Supported Operating Systems/Container Orchestrator Platforms
{{}}
-| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
|---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:|
-| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
+| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
| RHEL | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x |
| Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 |
| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 |
-| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 |
-| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.5.x | 3.4.x | 3.4.x |
-| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 |
+| Red Hat OpenShift | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS |
+| Mirantis Kubernetes Engine | 3.5.x | 3.5.x | 3.5.x | 3.5.x | 3.5.x |
+| Google Anthos | 1.9 | 1.8 | no | 1.9 | 1.9 |
| VMware Tanzu | no | no | NFS | NFS | NFS |
| Rancher Kubernetes Engine | yes | yes | yes | yes | yes |
| Amazon Elastic Kubernetes Service<br>Anywhere | no | yes | no | no | yes |
@@ -32,39 +32,40 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
### CSI Driver Capabilities
{{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
-|--------------------------|:--------:|:---------:|:------:|:----------:|:----------:|
-| CSI Driver version | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 |
-| Static Provisioning | yes | yes | yes | yes | yes |
-| Dynamic Provisioning | yes | yes | yes | yes | yes |
-| Expand Persistent Volume | yes | yes | yes | yes | yes |
-| Create VolumeSnapshot | yes | yes | yes | yes | yes |
-| Create Volume from Snapshot | yes | yes | yes | yes | yes |
-| Delete Snapshot | yes | yes | yes | yes | yes |
-| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)| RWO/<br>RWOP(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP(Raw block) | RWO/ROX/RWOP<br>RWX (Raw block only) | RWO/ROX/RWOP<br>RWX (Raw block & NFS only) | RWO/RWX/ROX/<br>RWOP | RWO/RWOP<br>(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP<br>(RawBlock, NFS) |
-| CSI Volume Cloning | yes | yes | yes | yes | yes |
-| CSI Raw Block Volume | yes | yes | yes | no | yes |
-| CSI Ephemeral Volume | no | yes | yes | yes | yes |
-| Topology | yes | yes | yes | yes | yes |
-| Multi-array | yes | yes | yes | yes | yes |
-| Volume Health Monitoring | yes | yes | yes | yes | yes |
+| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
+|--------------------------|:--------:|:---------:|:---------:|:----------:|:----------:|
+| CSI Driver version | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 |
+| Static Provisioning | yes | yes | yes | yes | yes |
+| Dynamic Provisioning | yes | yes | yes | yes | yes |
+| Expand Persistent Volume | yes | yes | yes | yes | yes |
+| Create VolumeSnapshot | yes | yes | yes | yes | yes |
+| Create Volume from Snapshot | yes | yes | yes | yes | yes |
+| Delete Snapshot | yes | yes | yes | yes | yes |
+| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)| **FC/iSCSI:**<br>RWO/<br>RWOP<br>**Raw block:**<br>RWO/<br>RWX/<br>ROX/<br>RWOP | RWO/ROX/RWOP<br>RWX (Raw block only) | RWO/ROX/RWOP<br>RWX (Raw block & NFS only) | RWO/RWX/ROX/<br>RWOP | RWO/RWOP<br>(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP<br>(RawBlock, NFS) |
+| CSI Volume Cloning | yes | yes | yes | yes | yes |
+| CSI Raw Block Volume | yes | yes | yes | no | yes |
+| CSI Ephemeral Volume | no | yes | yes | yes | yes |
+| Topology | yes | yes | yes | yes | yes |
+| Multi-array | yes | yes | yes | yes | yes |
+| Volume Health Monitoring | yes | yes | yes | yes | yes |
{{}}
### Supported Storage Platforms
{{}}
-| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
|---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:|
-| Storage Array |5978.479.479, 5978.711.711<br>Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x, 2.1.x |
+| Storage Array |5978.479.479, 5978.711.711<br>Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 |
{{}}
### Backend Storage Details
{{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
|---------------|:----------------:|:------------------:|:----------------:|:----------------:|:----------------:|
| Fibre Channel | yes | N/A | yes | N/A | yes |
| iSCSI | yes | N/A | yes | N/A | yes |
| NVMeTCP | N/A | N/A | N/A | N/A | yes |
+| NVMeFC | N/A | N/A | N/A | N/A | yes |
| NFS | N/A | N/A | yes | yes | yes |
| Other | N/A | ScaleIO protocol | N/A | N/A | N/A |
| Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS |
| Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin |
| Platform-specific configurable settings | Service Level selection<br>iSCSI CHAP | - | Host IO Limit<br>Tiering Policy<br>NFS Host IO size<br>Snapshot Retention duration | Access Zone<br>NFS version (3 or 4); Configurable Export IPs | iSCSI CHAP |
-{{}}
+{{}}
\ No newline at end of file
diff --git a/content/v1/csidriver/archives/_index.md b/content/v1/csidriver/archives/_index.md
deleted file mode 100644
index c6df42da23..0000000000
--- a/content/v1/csidriver/archives/_index.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Archives
-description: Product Guide and Release Notes for previous versions of Dell CSI drivers
----
-
-## PowerScale
-### v1.3
--[Release Notes](/pdf/RN_isilon.pdf)
-
--[Product Guide](/pdf/PG_isilon.pdf)
-
-### v1.2
-
--[Release Notes](/pdf/RN_isilon_2.pdf)
-
--[Product Guide](/pdf/PG_isilon_2.pdf)
-
-## PowerMax
-
-### v1.4
--[Release Notes](/pdf/RN_powermax.pdf)
-
--[Product Guide](/pdf/PG_powermax.pdf)
-
-## PowerFlex
-
-### v1.2
--[Release Notes](/pdf/RN_vxflex.pdf)
-
--[Product Guide](/pdf/PG_vxflex.pdf)
-
-## PowerStore
-### v1.1
--[Release Notes](/pdf/RN_powerstore.pdf)
-
--[Product Guide](/pdf/PG_powerstore.pdf)
-
-## Unity
-### v1.3
--[Release Notes](/pdf/RN_unity.pdf)
-
--[Product Guide](/pdf/PG_unity.pdf)
-
diff --git a/content/v1/csidriver/features/powerflex.md b/content/v1/csidriver/features/powerflex.md
index 6353aa6f58..cfc331a718 100644
--- a/content/v1/csidriver/features/powerflex.md
+++ b/content/v1/csidriver/features/powerflex.md
@@ -7,7 +7,7 @@ Description: Code features for PowerFlex Driver
## Volume Snapshot Feature
-The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.21/1.22/1.23.
+The CSI PowerFlex driver versions 2.0 and higher support v1 snapshots.
In order to use Volume Snapshots, ensure the following components are deployed to your cluster:
- Kubernetes Volume Snapshot CRDs
@@ -82,35 +82,7 @@ spec:
## Create Consistent Snapshot of Group of Volumes
-This feature extends CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter side-car as part of the PowerFlex driver. Once the sidecar has been deployed, users can make snapshots by using yaml files such as this one:
-```
-apiVersion: volumegroup.storage.dell.com/v1
-kind: DellCsiVolumeGroupSnapshot
-metadata:
- name: "vg-snaprun1"
- namespace: "helmtest-vxflexos"
-spec:
- # Add fields here
- driverName: "csi-vxflexos.dellemc.com"
- # defines how to process VolumeSnapshot members when volume group snapshot is deleted
- # "Retain" - keep VolumeSnapshot instances
- # "Delete" - delete VolumeSnapshot instances
- memberReclaimPolicy: "Retain"
- volumesnapshotclass: "vxflexos-snapclass"
- pvcLabel: "vgs-snap-label"
- # pvcList:
- # - "pvcName1"
- # - "pvcName2"
-```
-The pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
-```
-metadata:
- name: pvol0
- namespace: helmtest-vxflexos
- labels:
- volume-group: vgs-snap-label
-```
-More details about the installation and use of the VolumeGroup Snapshotter can be found here: [dell-csi-volumegroup-snapshotter](https://github.com/dell/csi-volumegroup-snapshotter).
+This feature extends the CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter sidecar as part of the PowerFlex driver. Once the sidecar has been deployed, users can make snapshots by using YAML files. More information can be found here: [Volume Group Snapshotter](../../../snapshots/volume-group-snapshots/).
## Volume Expansion Feature
@@ -398,9 +370,9 @@ controller:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
-```
+```
> *NOTE:* Tolerations/selectors work the same way for node pods.
-
+
For configuring Controller HA on the Dell CSI Operator, please refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).
## SDC Deployment
@@ -450,7 +422,7 @@ There is a sample yaml file in the samples folder under the top-level directory
endpoint: "https://127.0.0.2"
skipCertificateValidation: true
mdm: "10.0.0.3,10.0.0.4"
- ```
+ ```
Here we specify that we want the CSI driver to manage two arrays: one with an IP `127.0.0.1` and the other with an IP `127.0.0.2`.
To use this config we need to create a Kubernetes secret from it. To do so, run the following command:
@@ -546,7 +518,7 @@ To run the corresponding helm test, go to csi-vxflexos/test/helm/ephemeral and f
Then run:
````
./testEphemeral.sh
-````
+````
This test deploys the pod with two ephemeral volumes and writes some data to them before deleting the pod.
When creating ephemeral volumes, it is important to specify the following within the volumeAttributes section: volumeName, size, storagepool, and if you want to use a non-default array, systemID.
@@ -587,7 +559,7 @@ Events:
Type Reason Age From Message
---- ------ ---- ---- ------
Warning VolumeConditionAbnormal 32s csi-pv-monitor-controller-csi-vxflexos.dellemc.com Volume is not found at 2021-11-03 20:31:04
-```
+```
Events will also be reported to pods that have abnormal volumes. In these two events from `kubectl describe pods <pod-name> -n <namespace>`, we can see that this pod has two abnormal volumes: one volume was unmounted outside of Kubernetes, while another was deleted from the PowerFlex array.
```
Events:
diff --git a/content/v1/csidriver/features/powermax.md b/content/v1/csidriver/features/powermax.md
index a635b79ec6..697c1040b1 100644
--- a/content/v1/csidriver/features/powermax.md
+++ b/content/v1/csidriver/features/powermax.md
@@ -399,7 +399,7 @@ After a successful installation of the driver, if a node Pod is running successf
The values for all these keys are always set to the name of the provisioner which is usually `csi-powermax.dellemc.com`.
-> *NOTE:* The Topology support does not include any customer-defined topology, that is, users cannot create their own labels for nodes and storage classes and expect the labels to be honored by the driver.
+Starting with version 2.3.0, topology keys have been enhanced: the driver can filter arrays and the associated transport protocols available to each node, and create topology keys based on that user input.
### Topology Usage
To use the Topology feature, the storage classes must be modified as follows:
@@ -437,6 +437,80 @@ on any worker node with access to the PowerMax array `000000000001` irrespective
For additional information on how to use _Topology aware Volume Provisioning_, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
+### Custom Topology keys
+To use the enhanced topology keys:
+1. Set `node.topologyControl.enabled` to `true`.
+2. Edit the config file [topologyConfig.yaml](https://github.com/dell/csi-powermax/blob/main/samples/configmap/topologyConfig.yaml) in the `csi-powermax/samples/configmap` folder and provide values for the following parameters.
+
+| Parameter | Description |
+|-----------|--------------|
+| allowedConnections | List of node, array, and protocol info for user-allowed configurations |
+| allowedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+| allowedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+| deniedConnections | List of node, array, and protocol info for user-denied configurations |
+| deniedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+| deniedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+
+
+
+**Sample config file:**
+
+```
+# allowedConnections contains a list of (node, array and protocol) info for user allowed configuration
+# For any given storage array ID and protocol on a Node, topology keys will be created for just those pair and
+# every other configuration is ignored
+# Please refer to the doc website about a detailed explanation of each configuration parameter
+# and the various possible inputs
+allowedConnections:
+ # nodeName: Name of the node on which user wants to apply given rules
+ # Allowed values:
+ # nodeName - name of a specific node
+ # * - all the nodes
+ # Examples: "node1", "*"
+ - nodeName: "node1"
+ # rules is a list of 'StorageArrayID:TransportProtocol' pair. ':' is required between both value
+ # Allowed values:
+ # StorageArrayID:
+ # - SymmetrixID : for specific storage array
+ # - "*" :- for all the arrays connected to the node
+ # TransportProtocol:
+ # - FC : Fibre Channel protocol
+ # - ISCSI : iSCSI protocol
+ # - "*" - for all the possible Transport Protocol
+ # Examples: "000000000001:FC", "000000000002:*", "*:FC", "*:*"
+ rules:
+ - "000000000001:FC"
+ - "000000000002:FC"
+ - nodeName: "*"
+ rules:
+ - "000000000002:FC"
+# deniedConnections contains a list of (node, array and protocol) info for denied configurations by user
+# For any given storage array ID and protocol on a Node, topology keys will be created for every other configuration but
+# not these input pairs
+deniedConnections:
+ - nodeName: "node2"
+ rules:
+ - "000000000002:*"
+ - nodeName: "node3"
+ rules:
+ - "*:*"
+```
+
+3. Use the command below to create a ConfigMap named `node-topology-config` in the `powermax` namespace:
+
+`kubectl create configmap node-topology-config --from-file=topologyConfig.yaml -n powermax`
+
+For example, with 3 nodes and 2 arrays, the sample config file above produces topology keys as shown below:
+
+New Topology keys
+N1: csi-driver/000000000001.FC:csi-driver, csi-driver/000000000002.FC:csi-driver
+
+N2 and N3: None
+
+
+>Note: The name of the ConfigMap must always be `node-topology-config`.
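+
+To verify which topology keys were applied, you can inspect the node labels with standard kubectl (shown as a usage sketch):
+
+```
+kubectl get nodes --show-labels
+```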
+
+
## Dynamic Logging Configuration
This feature is introduced in CSI Driver for PowerMax version 2.0.0.
diff --git a/content/v1/csidriver/features/powerstore.md b/content/v1/csidriver/features/powerstore.md
index 1f5b1fb50e..e4a3103b11 100644
--- a/content/v1/csidriver/features/powerstore.md
+++ b/content/v1/csidriver/features/powerstore.md
@@ -541,7 +541,7 @@ The value of that parameter is added as an additional entry to NFS Export host a
For example the following notation:
```yaml
externalAccess: "10.0.0.0/24"
-```
+```
This means that we allow for NFS Export created by driver to be consumed by address range `10.0.0.0-10.0.0.255`.
@@ -668,10 +668,65 @@ nfsAcls: "A::OWNER@:rwatTnNcCy,A::GROUP@:rxtncy,A::EVERYONE@:rxtncy,A::user@doma
>POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
-## NVMe/TCP Support
-
-CSI Driver for Dell Powerstore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol on secret should be specified as `NVMeTCP`.
-In case blockProtocol is specified as `auto`, the driver will be able to find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, then FC gets the highest priority followed by iSCSI and then NVMeTCP.
+## NVMe Support
+**NVMeTCP Support**
+CSI Driver for Dell PowerStore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol in the secret should be specified as `NVMeTCP`.
>Note: NVMe/TCP is not supported on RHEL 7.x versions and CoreOS.
>NVMe/TCP is supported with PowerStore 2.1 and above.
+
+**NVMeFC Support**
+CSI Driver for Dell PowerStore 2.3.0 and above supports NVMe/FC provisioning. To enable NVMe/FC provisioning, blockProtocol in the secret should be specified as `NVMeFC`.
+>NVMe/FC is supported with PowerStore 3.0 and above.
+
+>The NVMe/FC feature is supported with Helm.
+
+>Note:
+> If blockProtocol is specified as `auto`, the driver will find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, NVMeFC gets the highest priority, followed by NVMeTCP, then FC, and then iSCSI.
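+
+For illustration, below is a minimal sketch of one array entry in the driver secret showing where blockProtocol is set; the endpoint, globalID, and credentials are placeholder values:
+
+```yaml
+arrays:
+  - endpoint: "https://10.0.0.1/api/rest"  # PowerStore API endpoint (placeholder)
+    globalID: "unique"                     # array global ID (placeholder)
+    username: "user"
+    password: "password"
+    skipCertificateValidation: true
+    blockProtocol: "NVMeFC"                # or "NVMeTCP", "FC", "iSCSI", "auto"
+```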
+
+## Volume group snapshot Support
+
+CSI Driver for Dell PowerStore 2.3.0 and above supports creating volume groups and taking snapshots of them by making use of a CRD (Custom Resource Definition). More information can be found here: [Volume Group Snapshotter](../../../snapshots/volume-group-snapshots/).
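+
+As a hedged sketch, adapted from the generic volume group snapshotter sample (the namespace, snapshot class, and label names are illustrative), a volume group snapshot request looks like:
+
+```yaml
+apiVersion: volumegroup.storage.dell.com/v1
+kind: DellCsiVolumeGroupSnapshot
+metadata:
+  name: "vg-snaprun1"
+  namespace: "default"
+spec:
+  driverName: "csi-powerstore.dellemc.com"
+  # how member VolumeSnapshots are handled when the group snapshot is deleted
+  memberReclaimPolicy: "Retain"                 # or "Delete"
+  volumesnapshotclass: "powerstore-snapclass"   # illustrative snapshot class name
+  pvcLabel: "vgs-snap-label"                    # PVCs carrying this label are snapshotted together
+```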
+
+## Configurable Volume Attributes (Optional)
+
+The CSI PowerStore driver version 2.3.0 and above supports configurable volume attributes.
+
+The PowerStore array provides a set of optional volume creation attributes. These attributes can be configured for the volume (block and NFS) at the time of creation through the PowerStore CSI driver.
+These attributes can be specified as labels in the PVC YAML file. The following is a sample manifest for creating a volume with some of the configurable volume attributes.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc1
+ namespace: default
+ labels:
+ description: DB-volume
+ appliance_id: A1
+ volume_group_id: f5f9dbbd-d12f-463e-becb-2e6d0a85405e
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 8Gi
+ storageClassName: powerstore-ext4
+
+```
+
+>Note: Default description value is `pvcName-pvcNamespace`.
+
+The following is the list of all the attributes supported by the PowerStore CSI driver:
+
+| Block Volume | NFS Volume |
+| --- | --- |
+| description<br>appliance_id<br>volume_group_id<br>protection_policy_id<br>performance_policy_id<br>app_type<br>app_type_other | description<br>config_type<br>access_policy<br>locking_policy<br>folder_rename_policy<br>is_async_mtime_enabled<br>protection_policy_id<br>file_events_publishing_mode<br>host_io_size<br>flr_attributes.flr_create.mode<br>flr_attributes.flr_create.default_retention<br>flr_attributes.flr_create.maximum_retention<br>flr_attributes.flr_create.minimum_retention |
+
+
+
+**Note:**
+>Refer to the PowerStore array specification for the allowed values for each attribute, at `https://<array-ip>/swaggerui/`.
+>Make sure that the attributes specified are supported by the version of PowerStore array used.
+
+>Configurable Volume Attributes feature is supported with Helm.
diff --git a/content/v1/csidriver/features/unity.md b/content/v1/csidriver/features/unity.md
index 7559245396..4cac022944 100644
--- a/content/v1/csidriver/features/unity.md
+++ b/content/v1/csidriver/features/unity.md
@@ -1,6 +1,6 @@
---
-title: Unity
-Description: Code features for Unity Driver
+title: Unity XT
+Description: Code features for Unity XT Driver
weight: 1
---
@@ -30,9 +30,9 @@ kubectl delete -f test/sample.yaml
## Consuming existing volumes with static provisioning
-You can use existent volumes from Unity array as Persistent Volumes in your Kubernetes, to do that you must perform the following steps:
+You can use existing volumes from the Unity XT array as Persistent Volumes in your Kubernetes cluster. To do that, perform the following steps:
-1. Open your volume in Unity Management UI (Unisphere), and take a note of volume-id. The `volume-id` looks like `csiunity-xxxxx` and CLI ID looks like `sv_xxxx`.
+1. Open your volume in the Unity XT Management UI (Unisphere), and note the volume-id. The `volume-id` looks like `csiunity-xxxxx` and the CLI ID looks like `sv_xxxx`.
2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs.
```yaml
@@ -106,8 +106,6 @@ In order to use Volume Snapshots, ensure the following components have been depl
### Volume Snapshot Class
-During the installation of the CSI Unity 2.0 driver and higher, a Volume Snapshot Class is not created and need to create Volume Snapshot Class.
-
Following is the manifest to create Volume Snapshot Class :
```yaml
@@ -146,7 +144,7 @@ status:
readyToUse: true
```
Note :
-For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does not create any Volume Snapshot classes as part of the driver installation. A set of annotated volume snapshot class manifests have been provided in the `csi-unity/samples/volumesnapshotclass/` folder. Use these samples to create new Volume Snapshot to provision storage.
+A set of annotated volume snapshot class manifests is provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshot Classes to provision storage.
### Creating PVCs with Volume Snapshots as Source
@@ -173,7 +171,7 @@ spec:
## Volume Expansion
-The CSI Unity driver version 1.3 and later supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node).
+The CSI Unity XT driver supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node).
To use this feature, the storage class that is used to create the PVC must have the attribute `allowVolumeExpansion` set to true.
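
As a sketch, assuming an illustrative class name, such a storage class looks like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-expandable          # illustrative name
provisioner: csi-unity.dellemc.com
# array-specific parameters (arrayId, storagePool, protocol, ...) omitted for brevity
allowVolumeExpansion: true        # required for PVC expansion
reclaimPolicy: Delete
```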
@@ -215,7 +213,7 @@ spec:
## Raw block support
-The CSI Unity driver supports Raw Block Volumes.
+The CSI Unity XT driver supports Raw Block Volumes.
Raw Block volumes are created using the volumeDevices list in the pod template spec with each entry accessing a volumeClaimTemplate specifying a volumeMode: Block. The following is an example configuration:
```yaml
@@ -259,14 +257,14 @@ spec:
Access modes allowed are ReadWriteOnce and ReadWriteMany. Raw Block volumes are presented as a block device to the pod by using a bind mount to a block device in the node's file system. The driver does not format or check the format of any file system on the block device.
-Raw Block volumes support online Volume Expansion, but it is up to the application to manage to reconfigure the file system (if any) to the new size. Access mode ReadOnlyMany is not supported with raw block since we cannot restrict volumes to be readonly from Unity.
+Raw Block volumes support online Volume Expansion, but it is up to the application to manage and reconfigure the file system (if any) to the new size. Access mode ReadOnlyMany is not supported with raw block since we cannot restrict volumes to be readonly from Unity XT.
For additional information, see the [kubernetes](https://kubernetes.io/DOCS/CONCEPTS/STORAGE/PERSISTENT-VOLUMES/#volume-mode) website.
## Volume Cloning Feature
-The CSI Unity driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume.
+The CSI Unity XT driver supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume.
Source and destination PVC must be in the same namespace and have the same Storage Class.
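
A minimal sketch of a clone request, assuming illustrative PVC and storage class names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc                # new PVC (illustrative name)
  namespace: default
spec:
  storageClassName: unity         # must match the source PVC's storage class
  dataSource:
    name: source-pvc              # existing PVC to clone (illustrative name)
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                # must be at least the source PVC's size
```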
@@ -310,11 +308,11 @@ spec:
## Ephemeral Inline Volume
-The CSI Unity driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
+The CSI Unity XT driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed.
-The following is a sample manifest for creating ephemeral volume in pod manifest with CSI Unity driver.
+The following is a sample manifest for creating ephemeral volume in pod manifest with CSI Unity XT driver.
```yaml
kind: Pod
@@ -361,9 +359,9 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
## Controller HA
-The CSI Unity driver supports controller HA feature. Instead of StatefulSet controller pods deployed as a Deployment.
+The CSI Unity XT driver supports the controller HA feature. Instead of a StatefulSet, controller pods are deployed as a Deployment.
-By default, number of replicas is set to 2, you can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation. When installing via Operator you can change the `replicas` parameter in the `spec.driver` section in your Unity Custom Resource.
+By default, the number of replicas is set to 2. You can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation (see the sketch below). When installing via Operator, you can change the `replicas` parameter in the `spec.driver` section in your Unity XT Custom Resource.
When multiple replicas of controller pods are in a cluster, each sidecar (Attacher, Provisioner, Resizer, and Snapshotter) tries to get a lease so that only one instance of each sidecar is active in the cluster at a time.
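
For example, to disable controller HA, a sketch of the relevant `myvalues.yaml` entry (confirm the key's exact placement against your chart's values.yaml):

```yaml
controllerCount: 1   # default is 2; 1 disables controller HA
```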
@@ -407,7 +405,7 @@ As said before you can configure where node driver pods would be assigned in a s
## Topology
-The CSI Unity driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
+The CSI Unity XT driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
This Topology support does not include customer-defined topology, users cannot create their own labels for nodes, they should use whatever labels are returned by the driver and applied automatically by Kubernetes on its nodes.
@@ -433,7 +431,7 @@ allowedTopologies:
- "true"
```
-This example matches all nodes where the driver has a connection to the Unity array with array ID mentioned via Fiber Channel. Similarly, by replacing `fc` with `iscsi` in the key checks for iSCSI connectivity with the node.
+This example matches all nodes where the driver has a Fibre Channel connection to the Unity XT array with the mentioned array ID. Similarly, replacing `fc` with `iscsi` in the key checks for iSCSI connectivity with the node.
You can check what labels your nodes contain by running `kubectl get nodes --show-labels` command.
@@ -442,7 +440,7 @@ You can check what labels your nodes contain by running `kubectl get nodes --sho
For any additional information about the topology, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
## Volume Limit
-The CSI Driver for Dell Unity allows users to specify the maximum number of Unity volumes that can be used in a node.
+The CSI Driver for Dell Unity XT allows users to specify the maximum number of Unity XT volumes that can be used in a node.
The user can set the volume limit for a node by creating a node label `max-unity-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node <node-name> max-unity-volumes-per-node=<volume-limit>`
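
For example, with a hypothetical node name and limit:

```
kubectl label node worker-1 max-unity-volumes-per-node=100
```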
@@ -452,12 +450,12 @@ The user can also set the volume limit for all the nodes in the cluster by speci
>**NOTE:**
To reflect the changes after setting the value either via node label or in values.yaml file, user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.
If the value is set both by node label and values.yaml file then node label value will get the precedence and user has to remove the node label in order to reflect the values.yaml value.
The default value of `maxUnityVolumesPerNode` is 0.
If `maxUnityVolumesPerNode` is set to zero, then Container Orchestration decides how many volumes of this type can be published by the controller to the node.
The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set.
## NAT Support
-CSI Driver for Dell Unity is supported in the NAT environment for NFS protocol.
+CSI Driver for Dell Unity XT is supported in the NAT environment for NFS protocol.
The user will be able to install the driver and able to create pods.
## Single Pod Access Mode for PersistentVolumes
-CSI Driver for Unity supports a new accessmode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity allows to restrict volume access to a single pod in the cluster
+CSI Driver for Unity XT supports a new access mode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity XT restricts volume access to a single pod in the cluster.
Prerequisites
1. Enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet as the ReadWriteOncePod access mode is in alpha for Kubernetes v1.22 and is only supported for CSI volumes. You can enable the feature by setting command line arguments:
@@ -477,14 +475,13 @@ spec:
```
## Volume Health Monitoring
-CSI Driver for Unity supports volume health monitoring. This is an alpha feature and requires feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`.
+CSI Driver for Unity XT supports volume health monitoring. This is an alpha feature and requires the feature gate to be enabled by setting the command line argument `--feature-gates="...,CSIVolumeHealth=true"`.
This feature:
1. Reports on the condition of the underlying volumes via events when a volume condition is abnormal. We can watch these events by describing the PVC: `kubectl describe pvc <pvc-name> -n <namespace>`
2. Collects the volume stats. We can see the volume usage in the node logs: `kubectl logs <node-pod-name> -n <namespace> -c driver`
-By default this is disabled in CSI Driver for Unity. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
+By default this is disabled in CSI Driver for Unity XT. You will have to set the `healthMonitor.enabled` flag for the controller, the node, or both in `values.yaml` to get the volume stats and volume condition.
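+
+A sketch of the relevant `values.yaml` entries (key names as in recent charts; confirm against your chart's values.yaml):
+
+```yaml
+controller:
+  healthMonitor:
+    enabled: true   # report volume condition via events
+node:
+  healthMonitor:
+    enabled: true   # collect volume usage stats
+```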
## Dynamic Logging Configuration
-This feature is introduced in CSI Driver for unity version 2.0.0.
### Helm based installation
As part of driver installation, a ConfigMap with the name `unity-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver.
@@ -508,13 +505,11 @@ To update the log level dynamically user has to edit the ConfigMap `unity-config
kubectl edit configmap -n unity unity-config-params
```
->Note: Prior to CSI Driver for unity version 2.0.0, the log level was allowed to be updated dynamically through `logLevel` attribute in the secret object.
-
-## Tenancy support for Unity NFS
+## Tenancy support for Unity XT NFS
-The CSI Unity driver version 2.1.0 (and later versions) supports the Tenancy feature of Unity such that the user will be able to associate specific worker nodes (in the cluster) and NFS storage volumes with Tenant.
+The CSI Unity XT driver version 2.1.0 (and later versions) supports the Tenancy feature of Unity XT such that the user will be able to associate specific worker nodes (in the cluster) and NFS storage volumes with a Tenant.
-Prerequisites (to be manually created in Unity Array) before the driver installation:
+Prerequisites (to be manually created in Unity XT Array) before the driver installation:
* Create Tenants
* Create Pools
* Create NAS Servers with Tenant and Pool mapping
@@ -634,4 +629,4 @@ data:
SYNC_NODE_INFO_TIME_INTERVAL: "15"
TENANT_NAME: ""
```
->Note: csi-unity supports Tenancy in multi-array setup, provided the TenantName is the same across Unity instances.
+>Note: csi-unity supports Tenancy in multi-array setup, provided the TenantName is the same across Unity XT instances.
diff --git a/content/v1/csidriver/installation/helm/isilon.md b/content/v1/csidriver/installation/helm/isilon.md
index 08d51943eb..d1ba801503 100644
--- a/content/v1/csidriver/installation/helm/isilon.md
+++ b/content/v1/csidriver/installation/helm/isilon.md
@@ -25,6 +25,7 @@ The following are requirements to be met before installing the CSI Driver for De
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first
+- If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../../../../resiliency/deployment/) first
### Install Helm 3.0
@@ -120,7 +121,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation.
@@ -139,6 +140,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
| kubeletConfigDir | Specify kubelet config dir path | Yes | "/var/lib/kubelet" |
| enableCustomTopology | Indicates PowerScale FQDN/IP which will be fetched from node label and the same will be used by controller and node pod to establish a connection to Array. This requires enableCustomTopology to be enabled. | No | false |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+ | podmonAPIPort | Defines the port which csi-driver will use within the cluster to support podmon | No | 8083 |
+ | maxPathLen | Defines the maximum length of path for a volume | No | 192 |
| ***controller*** | Configure controller pod specific parameters | | |
| controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -171,6 +174,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
 | **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contacting support for additional information (a configuration sketch follows this table). | - | - |
 | enabled | A boolean that enables/disables the podmon feature. | No | false |
 | image | Image for podmon. | No | " " |
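+
+A hedged sketch of enabling podmon in *my-isilon-settings.yaml* (the image tag is a placeholder; use the version matching your CSM Resiliency release):
+
+```yaml
+podmon:
+  enabled: true
+  image: "dellemc/podmon:<tag>"   # placeholder tag
+```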
*NOTE:*
@@ -261,7 +267,7 @@ The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-install
### What happens to my existing storage classes?
-*Upgrading from CSI PowerScale v2.1 driver*:
+*Upgrading from CSI PowerScale v2.2 driver*:
The storage classes created as part of the installation have an annotation - "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish so.
*NOTE*:
@@ -283,7 +289,7 @@ Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volu
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerScale v2.1 driver*:
+*Upgrading from CSI PowerScale v2.2 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
diff --git a/content/v1/csidriver/installation/helm/powerflex.md b/content/v1/csidriver/installation/helm/powerflex.md
index 9bdb0ccdc0..c021fb43e9 100644
--- a/content/v1/csidriver/installation/helm/powerflex.md
+++ b/content/v1/csidriver/installation/helm/powerflex.md
@@ -29,6 +29,7 @@ The following are requirements that must be met before installing the CSI Driver
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- A user must exist on the array with a role _>= FrontEndConfigure_
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
+- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd (a configuration sketch follows this list). See the [troubleshooting section](../../../troubleshooting/powerflex.md) for details.
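+
+A hedged sketch of the relevant `/etc/multipath.conf` stanza, assuming SDC block devices appear as `/dev/scini*` on your nodes (verify the device naming before applying):
+
+```
+blacklist {
+  devnode "^scini[a-z]+"
+}
+```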
### Install Helm 3.0
@@ -109,7 +110,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
## Install the Driver
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
@@ -130,61 +131,36 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
Example: `samples/config.yaml`
- ```yaml
- # Username for accessing PowerFlex system.
- # If authorization is enabled, username will be ignored.
- - username: "admin"
- # Password for accessing PowerFlex system.
- # If authorization is enabled, password will be ignored.
- password: "password"
- # System name/ID of PowerFlex system.
- systemID: "ID1"
- # Previous names of PowerFlex system if used for PV.
- allSystemNames: "pflex-1,pflex-2"
- # REST API gateway HTTPS endpoint for PowerFlex system.
- # If authorization is enabled, endpoint should be the HTTPS localhost endpoint that
- # the authorization sidecar will listen on
- endpoint: "https://127.0.0.1"
- # Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface.
- # Allowed values: true or false
- # Default value: true
- skipCertificateValidation: true
- # indicates if this array is the default array
- # needed for backwards compatibility
- # only one array is allowed to have this set to true
- # Default value: false
- isDefault: true
- # defines the MDM(s) that SDC should register with on start.
- # Allowed values: a list of IP addresses or hostnames separated by comma.
- # Default value: none
- mdm: "10.0.0.1,10.0.0.2"
- - username: "admin"
- password: "Password123"
- systemID: "ID2"
- endpoint: "https://127.0.0.2"
- skipCertificateValidation: true
- mdm: "10.0.0.3,10.0.0.4"
- ```
-
- After editing the file, run the following command to create a secret called `vxflexos-config`:
+```yaml
+- username: "admin"
+ password: "Password123"
+ systemID: "ID2"
+ endpoint: "https://127.0.0.2"
+ skipCertificateValidation: true
+ isDefault: true
+ mdm: "10.0.0.3,10.0.0.4"
+```
 *NOTE: To use multiple arrays, copy and paste the section above for each array. Make sure isDefault is set to true for only one array.*
+
+After editing the file, run the below command to create a secret called `vxflexos-config`:
`kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml`
- Use the following command to replace or update the secret:
+Use the below command to replace or update the secret:
`kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
- *NOTE:*
+*NOTE:*
- - The user needs to validate the YAML syntax and array-related key/values while replacing the vxflexos-creds secret.
- - If you want to create a new array or update the MDM values in the secret, you will need to reinstall the driver. If you change other details, such as login information, the secret will dynamically update -- see [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration) for more details.
- - Old `json` format of the array configuration file is still supported in this release. If you already have your configuration in `json` format, you may continue to maintain it or you may transfer this configuration to `yaml`
- format and replace/update the secret.
- - "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- - Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- - If the user is using complex K8s version like "v1.21.3-mirantis-1", use below kubeVersion check in helm/csi-unity/Chart.yaml file.
+- The user needs to validate the YAML syntax and array-related key/values while replacing the vxflexos-creds secret.
+- If you want to create a new array or update the MDM values in the secret, you will need to reinstall the driver. If you change other details, such as login information, the secret will dynamically update -- see [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration) for more details.
+- Old `json` format of the array configuration file is still supported in this release. If you already have your configuration in `json` format, you may continue to maintain it or you may transfer this configuration to `yaml` format and replace/update the secret.
+- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
+- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
+- If the user is using a complex K8s version like "v1.21.3-mirantis-1", use the below kubeVersion check in the helm/csi-vxflexos/Chart.yaml file.
kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
-
+
+
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
6. If using automated SDC deployment:
@@ -206,6 +182,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
| kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet |
| defaultFsType | Used to set the default FS type which will be used for mount volumes if FsType is not specified in the storage class. Allowed values: ext4, xfs. | Yes | ext4 |
+| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None, File and ReadWriteOnceWithFSType`. | No | "ReadWriteOnceWithFSType" |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. Allowed values: Always, IfNotPresent, Never. | Yes | IfNotPresent |
| enablesnapshotcgdelete | A boolean that, when enabled, will delete all snapshots in a consistency group everytime a snap in the group is deleted. | Yes | false |
| enablelistvolumesnapshot | A boolean that, when enabled, will allow list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap). It is recommend this be false unless instructed otherwise. | Yes | false |
@@ -221,14 +198,13 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
| enabled | Enable/Disable deployment of external health monitor sidecar. | No | false |
| volumeHealthMonitorInterval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h)| No | 60s |
| **node** | This section allows the configuration of node-specific parameters. | - | - |
+| healthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
| nodeSelector | Defines what nodes would be selected for pods of node daemonset. Leave as blank to use all nodes. | Yes | " " |
| tolerations | Defines tolerations that would be applied to node daemonset. Leave as blank to install node driver only on worker nodes. | Yes | " " |
| **monitor** | This section allows the configuration of the SDC monitoring pod. | - | - |
| enabled | Set to enable the usage of the monitoring pod. | Yes | false |
| hostNetwork | Set whether the monitor pod should run on the host network or not. | Yes | true |
| hostPID | Set whether the monitor pod should run in the host namespace or not. | Yes | true |
-| **healthMonitor** | This section configures node side volume health monitoring | - | -|
-| enabled| Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
| **vgsnapshotter** | This section allows the configuration of the volume group snapshotter(vgsnapshotter) pod. | - | - |
| enabled | A boolean that enable/disable vg snapshotter feature. | No | false |
| image | Image for vg snapshotter. | No | " " |
@@ -338,8 +314,8 @@ Starting CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volum
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerFlex v2.1 driver*:
+*Upgrading from CSI PowerFlex v2.2 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.3.
diff --git a/content/v1/csidriver/installation/helm/powermax.md b/content/v1/csidriver/installation/helm/powermax.md
index ef8882ce05..d63d770012 100644
--- a/content/v1/csidriver/installation/helm/powermax.md
+++ b/content/v1/csidriver/installation/helm/powermax.md
@@ -162,7 +162,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
@@ -178,16 +178,40 @@ CRDs should be configured during replication prepare stage with repctl as descri
| Parameter | Description | Required | Default |
|-----------|--------------|------------|----------|
+| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
+|defaultCredentialsSecret| This secret name refers to:<br>1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode.<br>2. The proxy credentials if the driver is installed with proxy in StandAlone mode.<br>3. The default Unisphere credentials if credentialsSecret is not specified for a management server. | Yes | powermax-creds |
+| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - |
+| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001|
+| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 |
+| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| No | https://backup-1.unisphe.re:8443 |
+| managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays.| - | - |
+| endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 |
+| credentialsSecret| This refers to the user credentials for _endpoint_ | No| primary-1-secret|
+| skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates.| No | "True" |
+| certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty|
+| limits | This refers to various limits for Reverse Proxy | No | - |
+| maxActiveRead | This refers to the maximum concurrent READ request handled by the reverse proxy.| No | 5 |
+| maxActiveWrite | This refers to the maximum concurrent WRITE request handled by the reverse proxy.| No | 4 |
+| maxOutStandingRead | This refers to maximum queued READ request when reverse proxy receives more than _maxActiveRead_ requests. | No | 50 |
+| maxOutStandingWrite| This refers to maximum queued WRITE request when reverse proxy receives more than _maxActiveWrite_ requests.| No | 50 |
| kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
| imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
| clusterPrefix | Prefix that is used during the creation of various masking-related entities (Storage Groups, Masking Views, Hosts, and Volume Identifiers) on the array. The value that you specify here must be unique. Ensure that no other CSI PowerMax driver is managing the same arrays that are configured with the same prefix. The maximum length for this prefix is three characters. | Yes | "ABC" |
+| logLevel | CSI driver log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | "debug" |
+| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
| defaultFsType | Used to set the default FS type for external provisioner | Yes | ext4 |
| portGroups | List of comma-separated port group names. Any port group that is specified here must be present on all the arrays that the driver manages. | For iSCSI Only | "PortGroup1, PortGroup2, PortGroup3" |
-| storageResourcePool | This parameter must mention one of the SRPs on the PowerMax array that the symmetrixID specifies. This value is used to create the default storage class. | Yes| "SRP_1" |
-| serviceLevel | This parameter must mention one of the Service Levels on the PowerMax array. This value is used to create the default storage class. | Yes| "Bronze" |
| skipCertificateValidation | Skip client-side TLS verification of Unisphere certificates | No | "True" |
| transportProtocol | Set the preferred transport protocol for the Kubernetes cluster which helps the driver choose between FC and iSCSI when a node has both FC and iSCSI connectivity to a PowerMax array.| No | Empty|
| nodeNameTemplate | Used to specify a template that will be used by the driver to create Host/IG names on the PowerMax array. To use the default naming convention, leave this value empty. | No | Empty|
+| modifyHostName | Change any existing host names. When nodeNameTemplate is set, the host name is changed to the specified format; otherwise, the driver's default host name format is used. | No | false |
+| powerMaxDebug | Enables low-level and HTTP traffic logging between the CSI driver and Unisphere. Don't enable this unless asked to do so by the support team. | No | false |
+| enableCHAP | Determines whether the driver configures the iSCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key "chapsecret" | No | false |
+| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+| version | Current version of the driver. Don't modify this value as it will be used by the install script. | Yes | v2.3.0 |
+| images | Defines the container images used by the driver. | - | - |
+| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-powermax controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -202,25 +226,10 @@ CRDs should be configured during replication prepare stage with repctl as descri
| tolerations | Add tolerations as per requirement | No | - |
| nodeSelector | Add node selectors as per requirement | No | - |
| healthMonitor.enabled | Allows enabling/disabling the volume health monitor | No | false |
-| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
-|defaultCredentialsSecret| This secret name refers to:
1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode.
2. The proxy credentials if the driver is installed with proxy in StandAlone mode.
3. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds |
-| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - |
-| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001|
-| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 |
-| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| No | https://backup-1.unisphe.re:8443 |
-| managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays.| - | - |
-| endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 |
-| credentialsSecret| This refers to the user credentials for _endpoint_ | No| primary-1-secret|
-| skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates.| No | "True" |
-| certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty|
-| limits | This refers to various limits for Reverse Proxy | No | - |
-| maxActiveRead | This refers to the maximum concurrent READ request handled by the reverse proxy.| No | 5 |
-| maxActiveWrite | This refers to the maximum concurrent WRITE request handled by the reverse proxy.| No | 4 |
-| maxOutStandingRead | This refers to maximum queued READ request when reverse proxy receives more than _maxActiveRead_ requests. | No | 50 |
-| maxOutStandingWrite| This refers to maximum queued WRITE request when reverse proxy receives more than _maxActiveWrite_ requests.| No | 50 |
+| topologyControl.enabled | Allows enabling/disabling topology control to filter topology keys | No | false |
| **csireverseproxy**| This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - |
| enabled | Boolean parameter which indicates if CSI PowerMax Reverse Proxy is going to be configured and installed. **NOTE:** If not enabled, then there is no requirement to configure any of the following values. | No | "False" |
-| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v1.4.0 |
+| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v2.1.0 |
| tlsSecret | This refers to the TLS secret of the Reverse Proxy Server.| Yes | csirevproxy-tls-secret |
| deployAsSidecar | If set to _true_, the Reverse Proxy is installed as a sidecar to the driver's controller pod otherwise it is installed as a separate deployment.| Yes | "True" |
| port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation| Yes | 2222 |
@@ -230,14 +239,29 @@ CRDs should be configured during replication prepare stage with repctl as descri
| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
+| **migration** | [Migration](../../../../replication/migrating-volumes) is an optional feature to enable migration between storage classes | - | - |
+| enabled | A boolean that enables/disables the migration feature. | No | false |
+| image | Image for the dell-csi-migrator sidecar. | No | " " |
+| migrationPrefix | Enables the migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
+| **replication** | [Replication](../../../../replication/deployment) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters.| - | - |
+| enabled | A boolean that enables/disables the replication feature. | No | false |
+| image | Image for the dell-csi-replicator sidecar. | No | " " |
+| replicationContextPrefix | Enables sidecars to read required information from the volume context | No | powermax |
+| replicationPrefix | Prefix used to determine whether replication is enabled | No | replication.storage.dell.com |
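+
+Putting these parameters together, a minimal sketch of the `global` section of `my-powermax-settings.yaml` might look like the following (the array ID, endpoints, and secret names are the placeholder values from the table above; consult the `values.yaml` shipped with the chart for the authoritative layout):
+
+```yaml
+global:
+  defaultCredentialsSecret: powermax-creds
+  storageArrays:
+    - storageArrayId: "000000000001"           # placeholder Symmetrix ID
+      endpoint: https://primary-1.unisphe.re:8443
+      backupEndpoint: https://backup-1.unisphe.re:8443
+  managementServers:
+    - endpoint: https://primary-1.unisphe.re:8443
+      credentialsSecret: primary-1-secret      # placeholder secret name
+      skipCertificateValidation: false
+      certSecret: primary-certs                # placeholder CA certificate secret
+      limits:
+        maxActiveRead: 5
+        maxActiveWrite: 4
+        maxOutStandingRead: 50
+        maxOutStandingWrite: 50
+```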
8. Install the driver using `csi-install.sh` bash script by running `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ../helm/my-powermax-settings.yaml`
+9. Alternatively, you can install the driver using the standalone Helm chart with the command `helm install --values my-powermax-settings.yaml --namespace powermax powermax ./csi-powermax`
*Note:*
- For detailed instructions on how to run the install scripts, see the readme document in the dell-csi-helm-installer folder.
- There are a set of samples provided [here](#sample-values-file) to help you configure the driver with reverse proxy
- This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if the iSCSI initiators have been configured on all nodes. You can also skip the verification step by specifying the `--skip-verify-node` option
- In order to enable authorization, there should be an authorization proxy server already installed.
+- The PowerMax array username must have the `StorageAdmin` role to be able to perform CRUD operations.
+- If you are using a complex K8s version like “v1.22.3-mirantis-1”, use the following kubeVersion check in the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) file: kubeVersion: “>= 1.22.0-0 < 1.25.0-0”.
+- Provide all boolean values with double quotes. This applies only to values.yaml. Example: “true”/“false”.
+- The controllerCount parameter value must be less than or equal to the number of nodes in the Kubernetes cluster; otherwise, the install script fails.
+- The endpoint should not have any special characters at the end apart from the port number.
## Storage Classes
@@ -251,15 +275,15 @@ Upgrading from an older version of the driver: The storage classes will be delet
## Volume Snapshot Class
-Starting with CSI PowerMax v1.7, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
+Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerMax v2.1 driver*:
+*Upgrading from CSI PowerMax v2.1.0 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7.0 or higher, before upgrading to 2.3.0.
## Sample values file
The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
diff --git a/content/v1/csidriver/installation/helm/powerstore.md b/content/v1/csidriver/installation/helm/powerstore.md
index 7b009d83a4..858b0385db 100644
--- a/content/v1/csidriver/installation/helm/powerstore.md
+++ b/content/v1/csidriver/installation/helm/powerstore.md
@@ -62,18 +62,25 @@ To do this, run the `systemctl enable --now iscsid` command.
For information about configuring iSCSI, see _Dell PowerStore documentation_ on Dell Support.
-### Set up the NVMe/TCP Initiator
+### Set up the NVMe Initiator
-If you want to use the protocol, set up the NVMe/TCP initiators as follows:
+If you want to use the NVMe protocol, set up the NVMe initiators as follows:
- The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and interactive shell option. Install the NVMe CLI tool on the host using the below command.
`sudo apt install nvme-cli`
+**Requirements for NVMeTCP**
- Modules including the nvme, nvme_core, nvme_fabrics, and nvme_tcp are required for using NVMe over Fabrics using TCP. Load the NVMe and NVMe-OF Modules using the below commands:
```bash
modprobe nvme
modprobe nvme_tcp
```
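+
+To confirm the required modules are loaded, you can run a quick check like the following (illustrative, not part of the official procedure):
+```bash
+# list loaded NVMe-related kernel modules
+lsmod | grep nvme
+```
+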
+**Requirements for NVMeFC**
+- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be completed.
+
+*NOTE:*
+- Do not load the nvme_tcp module for NVMeFC
+
### Linux multipathing requirements
Dell PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell
PowerStore.
@@ -110,7 +117,21 @@ Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
-## Volume Health Monitoring
+#### Installation example
+
+You can install CRDs and default snapshot controller by running these commands:
+```bash
+git clone https://github.com/kubernetes-csi/external-snapshotter/
+cd ./external-snapshotter
+git checkout release-
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
+```
+
+*NOTE:*
+- It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
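+
+As an illustrative check (not part of the official procedure), you can confirm that the CRDs and the snapshot-controller pod exist:
+```bash
+# the volumesnapshot CRDs should be listed
+kubectl get crd | grep volumesnapshot
+# the snapshot-controller pod should be Running in kube-system
+kubectl get pods -n kube-system | grep snapshot-controller
+```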
+
+### Volume Health Monitoring
Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via helm.
To enable this feature, add the below block to the driver manifest before installing the driver. This ensures to install external
@@ -142,21 +163,6 @@ node:
enabled: false
```
-#### Installation example
-
-You can install CRDs and default snapshot controller by running following commands:
-```bash
-git clone https://github.com/kubernetes-csi/external-snapshotter/
-cd ./external-snapshotter
-git checkout release-
-kubectl kustomize client/config/crd | kubectl create -f -
-kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
-```
-
-*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
-- The CSI external-snapshotter sidecar is installed along with the driver and does not involve any extra configuration.
-
### (Optional) Replication feature Requirements
Applicable only if you decided to enable the Replication feature in `values.yaml`
@@ -174,7 +180,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace.
Make sure to use the same namespace throughout the installation.
3. Check `helm/csi-powerstore/driver-image.yaml` and confirm the driver image points to the new image.
@@ -184,16 +190,16 @@ CRDs should be configured during replication prepare stage with repctl as descri
- *username*, *password*: defines credentials for connecting to array.
- *skipCertificateValidation*: defines if we should use insecure connection or not.
- *isDefault*: defines if we should treat the current array as a default.
- - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, None, or auto).
+ - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto).
- *nasName*: defines what NAS should be used for NFS volumes.
- *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
Add more blocks similar to above for each PowerStore array if necessary.
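+
+Putting the parameters above together, a minimal sketch of `secret.yaml` might look like this (the endpoint, array ID, and credentials are placeholders, and the exact field names should be verified against the sample secret shipped with the driver):
+
+```yaml
+arrays:
+  - endpoint: "https://10.0.0.1/api/rest"   # placeholder PowerStore management endpoint
+    globalID: "unique"                      # placeholder array global ID
+    username: "user"
+    password: "password"
+    skipCertificateValidation: true
+    isDefault: true
+    blockProtocol: "auto"
+    nasName: "nas-server"
+```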
-5. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
-
+5. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
+6. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
+
> If you do not specify `arrayID` parameter in the storage class then the array that was specified as the default would be used for provisioning volumes.
-6. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
7. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
8. Edit the newly created values file and provide values for the following parameters `vi my-powerstore-settings.yaml`:
@@ -221,6 +227,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.nodeSelector | Defines what nodes would be selected for pods of node daemonset | Yes | " " |
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+| controller.vgsnapshot.enabled | Enables or disables the volume group snapshot feature | No | "true" |
9. Install the driver using `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
- After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n csi-powerstore`
@@ -257,7 +264,7 @@ There are samples storage class yaml files available under `samples/storageclass
allowedTopologies:
- matchLabelExpressions:
- key: csi-powerstore.dellemc.com/12.34.56.78-iscsi
-# replace "-iscsi" with "-fc", "-nvme" or "-nfs" at the end to use FC, NVMe or NFS enabled hosts
+# replace "-iscsi" with "-fc", "-nvmetcp", "-nvmefc", or "-nfs" at the end to use FC, NVMeTCP, NVMeFC, or NFS enabled hosts
# replace "12.34.56.78" with PowerStore endpoint IP
values:
- "true"
@@ -272,15 +279,15 @@ kubectl create -f
## Volume Snapshot Class
-Starting CSI PowerStore v1.4, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
+Starting with CSI PowerStore v1.4.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerStore v2.1 driver*:
+*Upgrading from CSI PowerStore v2.1.0 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4.0 or higher, before upgrading to 2.3.0.
## Dynamically update the powerstore secrets
diff --git a/content/v1/csidriver/installation/helm/unity.md b/content/v1/csidriver/installation/helm/unity.md
index 0db49246f5..38000db82b 100644
--- a/content/v1/csidriver/installation/helm/unity.md
+++ b/content/v1/csidriver/installation/helm/unity.md
@@ -1,14 +1,14 @@
---
-title: Unity
+title: Unity XT
description: >
- Installing CSI Driver for Unity via Helm
+ Installing CSI Driver for Unity XT via Helm
---
-The CSI Driver for Dell Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell Unity XT can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_:
-- CSI Driver for Unity
+- CSI Driver for Unity XT
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
@@ -17,29 +17,78 @@ The controller section of the Helm chart installs the following components in a
The node section of the Helm chart installs the following component in a _DaemonSet_:
-- CSI Driver for Unity
+- CSI Driver for Unity XT
- Kubernetes Node Registrar, which handles the driver registration
## Prerequisites
-Before you install CSI Driver for Unity, verify the requirements that are mentioned in this topic are installed and configured.
+Before you install CSI Driver for Unity XT, verify the requirements that are mentioned in this topic are installed and configured.
### Requirements
* Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
* Install Helm v3
-* To use FC protocol, the host must be zoned with Unity array and Multipath needs to be configured
+* To use the FC protocol, the host must be zoned with the Unity XT array and Multipath needs to be configured
* To use the iSCSI protocol, the iSCSI initiator utils packages need to be installed and Multipath needs to be configured
* To use the NFS protocol, the NFS utility packages need to be installed
* Mount propagation is enabled on the container runtime that is being used
+### Install Helm 3.0
+
+Install Helm 3.0 on the master node before you install the CSI Driver for Dell Unity XT.
+
+**Steps**
+
+Run the `curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash` command to install Helm 3.0.
+
+
+### Fibre Channel requirements
+
+Dell Unity XT supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the
+following requirement is met before you install the CSI Driver for Dell Unity XT:
+- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.
+
+
+### Set up the iSCSI Initiator
+The CSI Driver for Dell Unity XT supports iSCSI connectivity.
+
+If you use the iSCSI protocol, set up the iSCSI initiators as follows:
+- Ensure that the iSCSI initiators are available on both Controller and Worker nodes.
+- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell Unity XT array that
+ has IP interfaces. Manually create IP routes for each node that connects to the Dell Unity XT.
+- All Kubernetes nodes must have the _iscsi-initiator-utils_ package for CentOS/RHEL or _open-iscsi_ package for Ubuntu installed, and the _iscsid_ service must be enabled and running.
+ To do this, run the `systemctl enable --now iscsid` command.
+- Ensure that the unique initiator name is set in _/etc/iscsi/initiatorname.iscsi_.
+
+For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
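+
+As a quick sanity check (illustrative, not part of the official procedure), you can verify the initiator name and the iscsid service on each node:
+```bash
+# print the unique initiator name configured for this node
+cat /etc/iscsi/initiatorname.iscsi
+# confirm the iscsid service is active
+systemctl is-active iscsid
+```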
+
+### Linux multipathing requirements
+Dell Unity XT supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell
+Unity XT.
+
+Set up Linux multipathing as follows:
+- Ensure that all nodes have the _Device Mapper Multipathing_ package installed.
+> You can install it by running `yum install device-mapper-multipath` on CentOS or `apt install multipath-tools` on Ubuntu. This package should create a multipath configuration file located in `/etc/multipath.conf`.
+- Enable multipathing using the `mpathconf --enable --with_multipathd y` command.
+- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
+- Ensure that the multipath command for `multipath.conf` is available on all Kubernetes nodes.
+
+As a best practice, use the following options to help the operating system and the multipathing software detect path changes efficiently:
+```text
+path_grouping_policy multibus
+path_checker tur
+features "1 queue_if_no_path"
+path_selector "round-robin 0"
+no_path_retry 10
+```
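+
+For example, a minimal `/etc/multipath.conf` enabling the options above might look like the following sketch (merge it with any existing configuration rather than replacing it):
+```text
+defaults {
+  user_friendly_names yes
+  find_multipaths yes
+}
+```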
+
## Install CSI Driver
-Install CSI Driver for Unity using this procedure.
+Install CSI Driver for Unity XT using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.2.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.3.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -47,19 +96,19 @@ Install CSI Driver for Unity using this procedure.
Procedure
-1. Collect information from the Unity Systems like Unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
+1. Collect information from the Unity XT Systems like Unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
**Note**:
- * ArrayId corresponds to the serial number of Unity array.
- * Unity Array username must have role as Storage Administrator to be able to perform CRUD operations.
+ * ArrayId corresponds to the serial number of the Unity XT array.
+ * The Unity XT array username must have the Storage Administrator role to be able to perform CRUD operations.
* If the user is using a complex K8s version like "v1.21.3-mirantis-1", use the below kubeVersion check in the helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.25.0-0"
2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory of `csi-install.sh`, to customize settings for installation.
3. Edit `myvalues.yaml` to set the following parameters for your installation:
- The following table lists the primary configurable parameters of the Unity driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/csi-unity/blob/master/helm/csi-unity/values.yaml) file in this repository.
+ The following table lists the primary configurable parameters of the Unity XT driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/csi-unity/blob/master/helm/csi-unity/values.yaml) file in this repository.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
@@ -127,12 +176,12 @@ Procedure
5. Prepare the `secret.yaml` for driver configuration.
The following table lists driver configuration parameters for multiple storage arrays.
- | Parameter | Description | Required | Default |
- | ------------------------- | ----------------------------------- | -------- |-------- |
- | storageArrayList.username | Username for accessing Unity system | true | - |
- | storageArrayList.password | Password for accessing Unity system | true | - |
- | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - |
- | storageArrayList.arrayId | ArrayID for Unity system | true | - |
+ | Parameter | Description | Required | Default |
+ | ------------------------- | ---------------------------------------------- | -------- |-------- |
+ | storageArrayList.username | Username for accessing Unity XT system | true | - |
+ | storageArrayList.password | Password for accessing Unity XT system | true | - |
+ | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity XT system| true | - |
+ | storageArrayList.arrayId | ArrayID for Unity XT system | true | - |
| storageArrayList.skipCertificateValidation | "skipCertificateValidation " determines if the driver is going to validate unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of CA which signed the Unisphere certificate. | true | true |
| storageArrayList.isDefault| An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
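+
+Based on these parameters, a minimal sketch of `secret.yaml` might look like this (the array ID, endpoint, and credentials are placeholders; verify the field names against the sample secret shipped with the driver):
+
+```yaml
+storageArrayList:
+  - arrayId: "APM00000000001"   # placeholder Unity XT array serial number
+    username: "user"
+    password: "password"
+    endpoint: "https://10.0.0.2/"
+    skipCertificateValidation: true
+    isDefault: true
+```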
@@ -227,7 +276,7 @@ Procedure
-7. Run the `./csi-install.sh --namespace unity --values ./myvalues.yaml` command to proceed with the installation.
+7. Run the `./csi-install.sh --namespace unity --values ./myvalues.yaml` command to proceed with the installation using the bash script.
A successful installation must display messages that look similar to the following samples:
```
@@ -294,13 +343,27 @@ Procedure
At the end of the script unity-controller Deployment and DaemonSet unity-node will be ready, execute command `kubectl get pods -n unity` to get the status of the pods and you will see the following:
- * One or more Unity Controller (based on controllerCount) with 5/5 containers ready, and status displayed as Running.
- * Agent pods with 2/2 containers and the status displayed as Running.
-
+ * One or more Unity XT Controllers (based on controllerCount) with 5/5 containers ready, and status displayed as Running.
+ * Agent pods with 2/2 containers and the status displayed as Running.
+
+ **Note**:
+ To install a nightly or latest CSI driver build using the bash script, use this command:
+ `./csi-install.sh --namespace unity --values ./myvalues.yaml --version nightly/latest`
+
+8. You can also install the driver using the standalone helm chart by running the helm install command, first using the --dry-run flag to
+ confirm various parameters are as desired. Once the parameters are validated, run the command without the --dry-run flag.
+ Note: This example assumes that the user is at the repo root helm folder, i.e., csi-unity/helm.
+
+ **Syntax**: `helm install --dry-run --values <values-file> --namespace <namespace> <name> <helm-chart-path>`
+ `<namespace>` - namespace of the driver installation.
+ `<name>` - unity in the case of unity-creds and unity-certs-0 secrets.
+ `<helm-chart-path>` - path of the helm chart directory.
+ e.g: `helm install --dry-run --values ./csi-unity/myvalues.yaml --namespace unity unity ./csi-unity`
+
## Certificate validation for Unisphere REST API calls
-This topic provides details about setting up the certificate validation for the CSI Driver for Dell Unity.
+This topic provides details about setting up the Dell Unity XT certificate validation for the CSI Driver.
*Before you begin*
@@ -334,15 +397,15 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis
## Volume Snapshot Class
-For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does not create any Volume Snapshot classes as part of the driver installation. A wide set of annotated storage class manifests have been provided in the `csi-unity/samples/volumesnapshotclass/` folder. Use these samples to create new Volume Snapshot to provision storage.
+A wide set of annotated volume snapshot class manifests have been provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshot Classes for creating Volume Snapshots.
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI Unity v2.1 driver*:
+*Upgrading from CSI Unity XT v2.1.0 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity to 1.6 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI Unity XT to v1.6.0 or higher, before upgrading to v2.3.0.
## Storage Classes
@@ -350,7 +413,7 @@ Storage Classes are an essential Kubernetes construct for Storage provisioning.
A wide set of annotated storage class manifests have been provided in the [samples/storageclass](https://github.com/dell/csi-unity/tree/master/samples/storageclass) folder. Use these samples to create new storage classes to provision storage.
-For CSI Driver for Unity, a wide set of annotated storage class manifests have been provided in the `csi-unity/samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
### What happens to my existing storage classes?
@@ -393,9 +456,7 @@ User can update secret using the following command:
```
**Note**: Updating unity-certs-x secrets is a manual process, unlike unity-creds. Users have to re-install the driver in case of updating/adding the SSL certificates or changing the certSecretCount parameter.
-## Dynamic Logging Configuration
-
-This feature is introduced in CSI Driver for unity version 2.0.0.
+## Dynamic Logging Configuration
### Helm based installation
As part of driver installation, a ConfigMap with the name `unity-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver.
diff --git a/content/v1/csidriver/installation/offline/_index.md b/content/v1/csidriver/installation/offline/_index.md
index 07b0000bdb..127d35c937 100644
--- a/content/v1/csidriver/installation/offline/_index.md
+++ b/content/v1/csidriver/installation/offline/_index.md
@@ -12,7 +12,7 @@ This includes the following drivers:
* [PowerMax](https://github.com/dell/csi-powermax)
* [PowerScale](https://github.com/dell/csi-powerscale)
* [PowerStore](https://github.com/dell/csi-powerstore)
-* [Unity](https://github.com/dell/csi-unity)
+* [Unity XT](https://github.com/dell/csi-unity)
As well as the Dell CSI Operator
* [Dell CSI Operator](https://github.com/dell/dell-csi-operator)
@@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git
+git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git
```
```
cd dell-csi-operator/scripts
diff --git a/content/v1/csidriver/installation/operator/_index.md b/content/v1/csidriver/installation/operator/_index.md
index be62fc2dec..68113a0e90 100644
--- a/content/v1/csidriver/installation/operator/_index.md
+++ b/content/v1/csidriver/installation/operator/_index.md
@@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa
#### Full list of CSI Drivers and versions supported by the Dell CSI Operator
| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerFlex | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
+| CSI PowerMax | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerScale | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
+| CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
-| CSI Unity | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
-| CSI Unity | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
-| CSI Unity | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerStore | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
+| CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI Unity XT | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI Unity XT | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
@@ -97,7 +97,7 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Run `bash scripts/install.sh` to install the operator.
>NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
@@ -126,7 +126,7 @@ For installation of the supported drivers, a `CustomResource` has to be created
### Pre-requisites for upstream Kubernetes Clusters
On upstream Kubernetes clusters, make sure to install
* VolumeSnapshot CRDs
- * On clusters running v1.21,v1.22 & v1.23, make sure to install v1 VolumeSnapshot CRDs
+ * On clusters running v1.22,v1.23 & v1.24, make sure to install v1 VolumeSnapshot CRDs
* External Volume Snapshot Controller with the correct version
### Pre-requisites for Red Hat OpenShift Clusters
@@ -144,7 +144,7 @@ metadata:
spec:
config:
ignition:
- version: 2.2.0
+ version: 3.2.0
systemd:
units:
- name: "iscsid.service"
@@ -187,7 +187,7 @@ metadata:
spec:
config:
ignition:
- version: 2.2.0
+ version: 3.2.0
storage:
files:
- contents:
@@ -257,9 +257,9 @@ If you are installing the latest versions of the CSI drivers, the driver control
The CSI Drivers installed by the Dell CSI Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include –
* Modifying the installation directly via `kubectl edit`
- For e.g. - If the name of the installed unity driver is unity, then run
+ For example - If the name of the installed Unity XT driver is unity, then run
```
- # Replace driver-namespace with the namespace where the Unity driver is installed
+ # Replace driver-namespace with the namespace where the Unity XT driver is installed
$ kubectl edit csiunity/unity -n
```
and modify the installation. The usual fields to edit are the version of drivers and sidecars and the env variables.
@@ -274,7 +274,7 @@ The below notes explain some of the general items to take care of.
1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
```yaml
driver:
- configVersion: v2.2.0
+ configVersion: v2.3.0
```
2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
To enable this feature, we will have to modify the below block while upgrading the driver. To get the volume health state add
@@ -308,13 +308,13 @@ The below notes explain some of the general items to take care of.
name: snapshotter
- args:
- --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.4.0
+ image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0
imagePullPolicy: IfNotPresent
name: external-health-monitor
- image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
imagePullPolicy: IfNotPresent
name: attacher
- - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
+ - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
imagePullPolicy: IfNotPresent
name: registrar
- image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
@@ -348,7 +348,7 @@ data:
* Adding (supported) environment variables
* Updating the image of the driver
## Limitations
-* The Dell CSI Operator can't manage any existing driver installed using Helm charts. If you already have installed one of the DellEMC CSI driver in your cluster and want to use the operator based deployment, uninstall the driver and then redeploy the driver following the installation procedure described above
+* The Dell CSI Operator can't manage any existing driver installed using Helm charts. If you have already installed one of the Dell CSI drivers in your cluster and want to use the operator based deployment, uninstall the driver and then redeploy it following the installation procedure described above.
* The Dell CSI Operator is not fully compliant with the OperatorHub React UI elements and some of the Custom Resource fields may show up as invalid or unsupported in the OperatorHub GUI. To get around this problem, use kubectl/oc commands to get details about the Custom Resource(CR). This issue will be fixed in the upcoming releases of the Dell CSI Operator
diff --git a/content/v1/csidriver/installation/operator/isilon.md b/content/v1/csidriver/installation/operator/isilon.md
index 00e4c69924..6b5fcef159 100644
--- a/content/v1/csidriver/installation/operator/isilon.md
+++ b/content/v1/csidriver/installation/operator/isilon.md
@@ -116,6 +116,7 @@ User can query for CSI-PowerScale driver using the following command:
| --------- | ----------- | -------- |-------- |
| dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet |
| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+ | X_CSI_MAX_PATH_LIMIT | Defines the maximum length of path for a volume | No | 192 |
| ***Common parameters for node and controller*** |
| CSI_ENDPOINT | The UNIX socket address for handling gRPC calls | No | /var/run/csi/csi.sock |
| X_CSI_ISI_SKIP_CERTIFICATE_VALIDATION | Specifies whether SSL security needs to be enabled for communication between PowerScale and CSI Driver | No | true |
@@ -150,7 +151,7 @@ User can query for CSI-PowerScale driver using the following command:
3. Also, the snapshotter and resizer sidecars are not optional; they come by default with the driver installation.
## Volume Health Monitoring
-This feature is introduced in CSI Driver for unity version 2.1.0.
+This feature is introduced in CSI Driver for PowerScale version 2.1.0.
### Operator based installation
diff --git a/content/v1/csidriver/installation/operator/powerflex.md b/content/v1/csidriver/installation/operator/powerflex.md
index ea959f4639..73350f7aa5 100644
--- a/content/v1/csidriver/installation/operator/powerflex.md
+++ b/content/v1/csidriver/installation/operator/powerflex.md
@@ -14,6 +14,7 @@ There are sample manifests provided which can be edited to do an easy installati
Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRD) which represents the application and use custom controllers to manage them.
### Prerequisites:
+- If multipath is configured, ensure that CSI-PowerFlex volumes are blacklisted by multipathd. See the [troubleshooting section](../../../troubleshooting/powerflex.md) for details; a sketch of the relevant configuration follows.
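+  A sketch of the relevant `/etc/multipath.conf` stanza, assuming PowerFlex SDC devices appear with the usual `scini` device-node prefix (verify the exact pattern against the troubleshooting section):
+  ```text
+  blacklist {
+    devnode "^scini[a-z]+"
+  }
+  ```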
#### SDC Deployment for Operator
- This feature deploys the sdc kernel modules on all nodes with the help of an init container.
- For non-supported versions of the OS, also perform the manual SDC deployment steps given below. Refer to https://hub.docker.com/r/dellemc/sdc for supported versions.
@@ -144,6 +145,7 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
| replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, excess pods will remain in a pending state. The default is 2, which allows for Controller high availability. | Yes | 2 |
+ | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| ***Common parameters for node and controller*** |
| X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT | Enable list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap) | No | false |
| X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE | Enable this to automatically delete all snapshots in a consistency group when a snap in the group is deleted | No | false |
@@ -151,20 +153,21 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
| X_CSI_ALLOW_RWO_MULTI_POD_ACCESS | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3. NodePublishVolume description that requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | No | false |
5. Execute the `kubectl create -f ` command to create PowerFlex custom resource. This command will deploy the CSI-PowerFlex driver.
- Example CR for PowerFlex Driver
- ```yaml
- apiVersion: storage.dell.com/v1
+```yaml
+apiVersion: storage.dell.com/v1
kind: CSIVXFlexOS
metadata:
name: test-vxflexos
namespace: test-vxflexos
spec:
driver:
- configVersion: v2.2.0
+ configVersion: v2.3.0
replicas: 1
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
+ fsGroupPolicy: File
common:
- image: "dellemc/csi-vxflexos:v2.2.0"
+ image: "dellemc/csi-vxflexos:v2.3.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT
diff --git a/content/v1/csidriver/installation/operator/powermax.md b/content/v1/csidriver/installation/operator/powermax.md
index 781eb18fe7..7c1e13c246 100644
--- a/content/v1/csidriver/installation/operator/powermax.md
+++ b/content/v1/csidriver/installation/operator/powermax.md
@@ -16,6 +16,27 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
### Prerequisite
+#### Fibre Channel Requirements
+
+CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver:
+- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
+- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
+- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
+
+#### iSCSI Requirements
+
+The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
+
+Set up the iSCSI initiators as follows:
+- All Kubernetes nodes must have the _iscsi-initiator-utils_ package installed.
+- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
+- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
+- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
+- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
+
+For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
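+
+An illustrative preparation sequence for a RHEL-family node (package and service names may differ on other distributions):
+```bash
+# install the iSCSI initiator utilities and make sure iscsid is running
+sudo yum install -y iscsi-initiator-utils
+sudo systemctl enable --now iscsid
+```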
+
+
#### Create secret for client-side TLS verification (Optional)
Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls).
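+
+For example, a sketch of the command (assuming the CA certificate is in a local `ca_cert_0.pem` file; see the linked documentation for the authoritative steps):
+```bash
+kubectl create secret generic powermax-certs --namespace powermax --from-file=cert=ca_cert_0.pem
+```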
@@ -57,6 +78,7 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
| replicas | Controls the number of controller Pods you deploy. If controller Pods are greater than the number of available nodes, excess Pods will become stuck in pending. The default is 2 which allows for Controller high availability. | Yes | 2 |
+ | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| ***Common parameters for node and controller*** |
| X_CSI_K8S_CLUSTER_PREFIX | Define a prefix that is appended to all resources created in the array; unique per K8s/CSI deployment; max length - 3 characters | Yes | XYZ |
| X_CSI_POWERMAX_ENDPOINT | IP address of the Unisphere for PowerMax | Yes | https://0.0.0.0:8443 |
@@ -65,12 +87,56 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| X_CSI_MANAGED_ARRAYS | List of comma-separated array ID(s) which will be managed by the driver | Yes | - |
| X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. Leave blank if not using reverse proxy | No | - |
| X_CSI_GRPC_MAX_THREADS | Number of concurrent grpc requests allowed per client | No | 4 |
+ | X_CSI_IG_MODIFY_HOSTNAME | Change any existing host names. When nodeNameTemplate is set, the host name is changed to the specified format; otherwise, the driver's default host name format is used. | No | false |
+ | X_CSI_IG_NODENAME_TEMPLATE | Provide a template for the CSI driver to use while creating the Host/IG on the array for the nodes in the cluster. It is of the format a-b-c-%foo%-xyz where foo will be replaced by the host name of each node in the cluster. | No | - |
| X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - |
| X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller and Node plugin. Provides details of volume status, usage and volume condition. As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar | No | false |
| ***Node parameters***|
| X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
+ | X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disable topology control. It filters out arrays and the associated transport protocols available to each node, and creates topology keys based on such user input. | No | false |
5. Execute the following command to create the PowerMax custom resource:`kubectl create -f `. The above command will deploy the CSI-PowerMax driver.
+**Note** - If the CSI driver is being installed using the OCP UI, create these two configmaps manually using the command `oc create -f `
+1. Configmap name powermax-config-params
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: powermax-config-params
+ namespace: test-powermax
+ data:
+ driver-config-params.yaml: |
+ CSI_LOG_LEVEL: "debug"
+ CSI_LOG_FORMAT: "JSON"
+ ```
+2. Configmap name node-topology-config
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: node-topology-config
+ namespace: test-powermax
+ data:
+ topologyConfig.yaml: |
+ allowedConnections:
+ - nodeName: "node1"
+ rules:
+ - "000000000001:FC"
+ - "000000000002:FC"
+ - nodeName: "*"
+ rules:
+ - "000000000002:FC"
+ deniedConnections:
+ - nodeName: "node2"
+ rules:
+ - "000000000002:*"
+ - nodeName: "node3"
+ rules:
+ - "*:*"
+
+ ```
+
+
+
### CSI PowerMax ReverseProxy
CSI PowerMax ReverseProxy is an optional component that can be installed with the CSI PowerMax driver. For more details on this feature see the related [documentation](../../../features/powermax#csi-powermax-reverse-proxy).
@@ -113,7 +179,7 @@ metadata:
namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver
spec:
# Image for CSI PowerMax ReverseProxy
- image: dellemc/csipowermax-reverseproxy:v1.4.0 # <- CSI PowerMax Reverse Proxy image
+ image: dellemc/csipowermax-reverseproxy:v2.1.0 # <- CSI PowerMax Reverse Proxy image
imagePullPolicy: Always
# TLS secret which contains SSL certificate and private key for the Reverse Proxy server
tlsSecret: csirevproxy-tls-secret
@@ -199,8 +265,8 @@ metadata:
namespace: test-powermax
spec:
driver:
- # Config version for CSI PowerMax v2.2.0 driver
- configVersion: v2.2.0
+ # Config version for CSI PowerMax v2.3.0 driver
+ configVersion: v2.3.0
# replica: Define the number of PowerMax controller nodes
# to deploy to the Kubernetes release
# Allowed values: n, where n > 0
@@ -209,8 +275,8 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- # Image for CSI PowerMax driver v2.2.0
- image: dellemc/csi-powermax:v2.2.0
+ # Image for CSI PowerMax driver v2.3.0
+ image: dellemc/csi-powermax:v2.3.0
# imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
# Allowed values:
# Always: Always pull the image.
@@ -304,6 +370,14 @@ spec:
# Default value: false
- name: X_CSI_HEALTH_MONITOR_ENABLED
value: "false"
+ # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol
+ # if enabled, user can create custom topology keys by editing node-topology-config configmap.
+ # Allowed values:
+ # true: enable the filtration based on config map
+ # false: disable the filtration based on config map
+ # Default value: false
+ - name: X_CSI_TOPOLOGY_CONTROL_ENABLED
+ value: "false"
---
apiVersion: v1
kind: ConfigMap
@@ -314,13 +388,57 @@ data:
driver-config-params.yaml: |
CSI_LOG_LEVEL: "debug"
CSI_LOG_FORMAT: "JSON"
-
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: node-topology-config
+ namespace: test-powermax
+data:
+ topologyConfig.yaml: |
+ # allowedConnections contains a list of (node, array, protocol) combinations allowed by the user
+ # For any given storage array ID and protocol on a node, topology keys will be created for just those pairs and
+ # every other configuration is ignored
+ # Please refer to the doc website for a detailed explanation of each configuration parameter
+ # and the various possible inputs
+ allowedConnections:
+ # nodeName: Name of the node on which user wants to apply given rules
+ # Allowed values:
+ # nodeName - name of a specific node
+ # * - all the nodes
+ # Examples: "node1", "*"
+ - nodeName: "node1"
+ # rules is a list of 'StorageArrayID:TransportProtocol' pairs. ':' is required between the two values
+ # Allowed values:
+ # StorageArrayID:
+ # - SymmetrixID : for a specific storage array
+ # - "*" : for all the arrays connected to the node
+ # TransportProtocol:
+ # - FC : Fibre Channel protocol
+ # - ISCSI : iSCSI protocol
+ # - "*" : for all possible transport protocols
+ # Examples: "000000000001:FC", "000000000002:*", "*:FC", "*:*"
+ rules:
+ - "000000000001:FC"
+ - "000000000002:FC"
+ - nodeName: "*"
+ rules:
+ - "000000000002:FC"
+ # deniedConnections contains a list of (node, array, protocol) combinations denied by the user
+ # For any given storage array ID and protocol on a node, topology keys will be created for every configuration except
+ # these input pairs
+ deniedConnections:
+ - nodeName: "node2"
+ rules:
+ - "000000000002:*"
+ - nodeName: "node3"
+ rules:
+ - "*:*"
```
Note:
- - `dell-csi-operator` does not support the installation of CSI PowerMax ReverseProxy as a sidecar to the controller Pod. This facility is
- only present with `dell-csi-helm-installer`.
+ - `dell-csi-operator` does not support the installation of CSI PowerMax ReverseProxy as a sidecar to the controller Pod. This facility is only present with `dell-csi-helm-installer`.
- `Kubelet config dir path` is not yet configurable in case of Operator based driver installation.
- Also, the snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
@@ -332,7 +450,7 @@ Volume Health Monitoring feature is optional and by default this feature is disa
To enable this feature, set `X_CSI_HEALTH_MONITOR_ENABLED` to `true` in the driver manifest under the controller and node sections. Also, install the `external-health-monitor` sidecar from the `sideCars` section for the controller plugin.
To get the volume health state, `value` under the controller section should be set to true, as seen below. To get the volume stats, `value` under the node section should be set to true.
-
+```
# Install the 'external-health-monitor' sidecar accordingly.
# Allowed values:
# true: enable checking of health condition of CSI volumes
@@ -351,4 +469,40 @@ To get the volume health state `value` under controller should be set to true as
# Default value: false
- name: X_CSI_HEALTH_MONITOR_ENABLED
value: "true"
-```
\ No newline at end of file
+```
+
+## Support for custom topology keys
+
+This feature is introduced in CSI Driver for PowerMax version 2.3.0.
+
+### Operator based installation
+
+Support for custom topology keys is optional and by default this feature is disabled for drivers when installed via operator.
+
+`X_CSI_TOPOLOGY_CONTROL_ENABLED` provides a way to filter topology keys on a node based on array and transport protocol. If enabled, users can create custom topology keys by editing the `node-topology-config` configmap.
+
+1. To enable this feature, set `X_CSI_TOPOLOGY_CONTROL_ENABLED` to `true` in the driver manifest under node section.
+
+```
+ # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol.
+ # If enabled, users can create custom topology keys by editing the node-topology-config configmap.
+ # Allowed values:
+ # true: enable the filtration based on config map
+ # false: disable the filtration based on config map
+ # Default value: false
+ - name: X_CSI_TOPOLOGY_CONTROL_ENABLED
+ value: "false"
+```
+2. Edit the sample config map "node-topology-config" present in [sample CRD](#sample--crd-file-for--powermax) with appropriate values:
+
+ | Parameter | Description |
+ |-----------|--------------|
+ | allowedConnections | List of (node, array, protocol) combinations allowed by the user |
+ | allowedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+ | allowedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+ | deniedConnections | List of (node, array, protocol) combinations denied by the user |
+ | deniedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+ | deniedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+
+
+ >Note: The name of the configmap must always be `node-topology-config`.
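+
+After enabling the feature, the rules can be updated in place by editing the configmap; a minimal sketch, assuming the `test-powermax` namespace used in the samples above:
+```
+kubectl edit configmap -n test-powermax node-topology-config
+```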
diff --git a/content/v1/csidriver/installation/operator/powerstore.md b/content/v1/csidriver/installation/operator/powerstore.md
index ae60025943..d2b74a2896 100644
--- a/content/v1/csidriver/installation/operator/powerstore.md
+++ b/content/v1/csidriver/installation/operator/powerstore.md
@@ -30,7 +30,7 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
password: "password" # password for connecting to API
skipCertificateValidation: true # indicates if client side validation of (management)server's certificate can be skipped
isDefault: true # treat current array as a default (would be used by storage classes without arrayID parameter)
- blockProtocol: "auto" # what SCSI transport protocol use on node side (FC, ISCSI, NVMeTCP, None, or auto)
+ blockProtocol: "auto" # which SCSI transport protocol to use on the node side (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto)
nasName: "nas-server" # what NAS should be used for NFS volumes
nfsAcls: "0777" # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
# NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
@@ -69,13 +69,13 @@ metadata:
namespace: test-powerstore
spec:
driver:
- configVersion: v2.2.0
+ configVersion: v2.3.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
fsGroupPolicy: ReadWriteOnceWithFSType
common:
- image: "dellemc/csi-powerstore:v2.2.0"
+ image: "dellemc/csi-powerstore:v2.3.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_POWERSTORE_NODE_NAME_PREFIX
@@ -139,6 +139,7 @@ data:
| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| ***Node parameters*** |
| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
+
6. Execute the following command to create the PowerStore custom resource: `kubectl create -f `. The above command will deploy the CSI-PowerStore driver.
- After that, the driver should be installed; you can check the condition of the driver pods by running `kubectl get all -n ` (see the sketch below)
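
A sketch of both steps, using a hypothetical manifest file name and the `test-powerstore` namespace from the sample CR above:

```
kubectl create -f powerstore.yaml
kubectl get all -n test-powerstore
```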
@@ -177,7 +178,7 @@ volume stats value under node should be set to true.
## Dynamic Logging Configuration
-This feature is introduced in CSI Driver for unity version 2.0.0.
+This feature is introduced in CSI Driver for PowerStore version 2.0.0.
### Operator based installation
As part of the driver installation, a ConfigMap with the name `powerstore-config-params` is created using the manifest located in the sample file. This ConfigMap contains the attributes `CSI_LOG_LEVEL`, which specifies the current log level of the CSI driver, and `CSI_LOG_FORMAT`, which specifies the current log format. To set the default/initial log level, users can set these fields during driver installation.
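
To change the log level or format at runtime, this ConfigMap can be edited; a minimal sketch, assuming the driver is installed in the `test-powerstore` namespace:

```
kubectl edit configmap -n test-powerstore powerstore-config-params
```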
diff --git a/content/v1/csidriver/installation/operator/unity.md b/content/v1/csidriver/installation/operator/unity.md
index 93c0bb0f2f..89e8b9a699 100644
--- a/content/v1/csidriver/installation/operator/unity.md
+++ b/content/v1/csidriver/installation/operator/unity.md
@@ -1,24 +1,24 @@
---
-title: Unity
+title: Unity XT
description: >
- Installing CSI Driver for Unity via Operator
+ Installing CSI Driver for Unity XT via Operator
---
-## CSI Driver for Unity
+## CSI Driver for Unity XT
### Pre-requisites
-#### Create secret to store Unity credentials
+#### Create secret to store Unity XT credentials
Create a namespace called unity (it can be any user-defined name, but the commands in this section assume that the namespace is unity).
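
A minimal sketch of creating the namespace (assuming the name `unity`):

```
kubectl create namespace unity
```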
Prepare the secret.yaml for driver configuration.
The following table lists driver configuration parameters for multiple storage arrays.
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
-| username | Username for accessing Unity system | true | - |
-| password | Password for accessing Unity system | true | - |
-| restGateway | REST API gateway HTTPS endpoint Unity system| true | - |
-| arrayId | ArrayID for Unity system | true | - |
+| username | Username for accessing Unity XT system | true | - |
+| password | Password for accessing Unity XT system | true | - |
+| restGateway | REST API gateway HTTPS endpoint of the Unity XT system | true | - |
+| arrayId | ArrayID for Unity XT system | true | - |
| isDefaultArray | An array having isDefaultArray=true is for backward compatibility. This parameter should occur once in the list. | true | - |
Ex: secret.yaml
@@ -73,21 +73,21 @@ Execute command: ```kubectl create -f empty-secret.yaml```
Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the Unity XT driver and their default values:
- | Parameter | Description | Required | Default |
- | ----------------------------------------------- | ------------------------------------------------------------ | -------- | --------------------- |
- | ***Common parameters for node and controller*** | | | |
- | CSI_ENDPOINT | Specifies the HTTP endpoint for Unity. | No | /var/run/csi/csi.sock |
- | X_CSI_UNITY_ALLOW_MULTI_POD_ACCESS | Flag to enable multiple pods use the same pvc on the same node with RWO access mode | No | false |
- | ***Controller parameters*** | | | |
- | X_CSI_MODE | Driver starting mode | No | controller |
- | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | |
- | ***Node parameters*** | | | |
- | X_CSI_MODE | Driver starting mode | No | node |
- | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands. | No | /noderoot |
- | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | | |
-
-### Example CR for Unity
+ | Parameter | Description | Required | Default |
+ | ----------------------------------------------- | --------------------------------------------------------------------------- | -------- | --------------------- |
+ | ***Common parameters for node and controller*** | | | |
+ | CSI_ENDPOINT | Specifies the HTTP endpoint for Unity XT. | No | /var/run/csi/csi.sock |
+ | X_CSI_UNITY_ALLOW_MULTI_POD_ACCESS | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode | No | false |
+ | ***Controller parameters*** | | | |
+ | X_CSI_MODE | Driver starting mode | No | controller |
+ | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true |
+ | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | |
+ | ***Node parameters*** | | | |
+ | X_CSI_MODE | Driver starting mode | No | node |
+ | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands | No | /noderoot |
+ | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | |
+
+### Example CR for Unity XT
Refer samples from [here](https://github.com/dell/dell-csi-operator/tree/master/samples). Below is an example CR:
```yaml
apiVersion: storage.dell.com/v1
@@ -97,12 +97,12 @@ metadata:
namespace: test-unity
spec:
driver:
- configVersion: v2.2.0
+ configVersion: v2.3.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- image: "dellemc/csi-unity:v2.2.0"
+ image: "dellemc/csi-unity:v2.3.0"
imagePullPolicy: IfNotPresent
sideCars:
- name: provisioner
@@ -115,8 +115,8 @@ spec:
controller:
envs:
- # X_CSI_ENABLE_VOL_HEALTH_MONITOR: Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition.
- # As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar
+ # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin - volume condition.
+ # Install the 'external-health-monitor' sidecar accordingly.
# Allowed values:
# true: enable checking of health condition of CSI volumes
# false: disable checking of health condition of CSI volumes
@@ -130,16 +130,16 @@ spec:
# Leave as blank to consider all nodes
# Allowed values: map of key-value pairs
# Default value: None
- # Examples:
- # node-role.kubernetes.io/master: ""
nodeSelector:
- # node-role.kubernetes.io/master: ""
+ # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
+ # node-role.kubernetes.io/control-plane: ""
# tolerations: Define tolerations for the controllers, if required.
# Leave as blank to install controller on worker nodes
# Default value: None
tolerations:
- # - key: "node-role.kubernetes.io/master"
+ # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
+ # - key: "node-role.kubernetes.io/control-plane"
# operator: "Exists"
# effect: "NoSchedule"
@@ -158,18 +158,26 @@ spec:
# Leave as blank to consider all nodes
# Allowed values: map of key-value pairs
# Default value: None
- # Examples:
- # node-role.kubernetes.io/master: ""
nodeSelector:
- # node-role.kubernetes.io/master: ""
+ # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
+ # node-role.kubernetes.io/control-plane: ""
- # tolerations: Define tolerations for the controllers, if required.
- # Leave as blank to install controller on worker nodes
+ # tolerations: Define tolerations for the node daemonset, if required.
# Default value: None
tolerations:
- # - key: "node-role.kubernetes.io/master"
+ # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint
+ # - key: "node-role.kubernetes.io/control-plane"
# operator: "Exists"
# effect: "NoSchedule"
+ # - key: "node.kubernetes.io/memory-pressure"
+ # operator: "Exists"
+ # effect: "NoExecute"
+ # - key: "node.kubernetes.io/disk-pressure"
+ # operator: "Exists"
+ # effect: "NoExecute"
+ # - key: "node.kubernetes.io/network-unavailable"
+ # operator: "Exists"
+ # effect: "NoExecute"
---
apiVersion: v1
@@ -188,8 +196,6 @@ data:
## Dynamic Logging Configuration
-This feature is introduced in CSI Driver for unity version 2.0.0.
-
### Operator based installation
As part of the driver installation, a ConfigMap with the name `unity-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL`, which specifies the current log level of the CSI driver. To set the default/initial log level, users can set this field during driver installation.
@@ -199,12 +205,12 @@ kubectl edit configmap -n unity unity-config-params
```
**Note** :
- 1. Prior to CSI Driver for unity version 2.0.0, the log level was allowed to be updated dynamically through `logLevel` attribute in the secret object.
1. The log level is not allowed to be updated dynamically through the `logLevel` attribute in the secret object.
2. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
3. Also, the snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
## Volume Health Monitoring
-This feature is introduced in CSI Driver for unity version 2.1.0.
+This feature is introduced in CSI Driver for Unity XT version 2.1.0.
### Operator based installation
diff --git a/content/v1/csidriver/installation/test/unity.md b/content/v1/csidriver/installation/test/unity.md
index 95998ad511..db32d53c98 100644
--- a/content/v1/csidriver/installation/test/unity.md
+++ b/content/v1/csidriver/installation/test/unity.md
@@ -1,10 +1,10 @@
---
-title: Test Unity CSI Driver
-linktitle: Unity
-description: Tests to validate Unity CSI Driver installation
+title: Test Unity XT CSI Driver
+linktitle: Unity XT
+description: Tests to validate Unity XT CSI Driver installation
---
-## Test deploying a simple Pod and Pvc with Unity storage
+## Test deploying a simple Pod and PVC with Unity XT storage
In the repository, a simple test manifest exists that creates three different PersistentVolumeClaims using the default NFS, iSCSI, and FC storage classes and automatically mounts them to the pod.
**Steps**
@@ -30,7 +30,7 @@ You can find all the created resources in `test-unity` namespace.
## Support for SLES 15 SP2
-The CSI Driver for Dell Unity requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
+The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
- open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning**
- nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning**
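
A minimal sketch of installing both packages, assuming zypper as the package manager on SLES 15 SP2:

```
zypper install open-iscsi nfs-utils
```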
diff --git a/content/v1/csidriver/partners/operator.md b/content/v1/csidriver/partners/operator.md
index d60c9e6459..1b4a5fffd2 100644
--- a/content/v1/csidriver/partners/operator.md
+++ b/content/v1/csidriver/partners/operator.md
@@ -12,7 +12,7 @@ Users can install the Dell CSI Operator via [Operatorhub.io](https://operatorhub
![](../ophub1.png)
-2. Click DellEMC Operator.
+2. Click Dell Operator.
![](../ophub2.png)
diff --git a/content/v1/csidriver/partners/tanzu.md b/content/v1/csidriver/partners/tanzu.md
index 393f5b398f..33c7aafeaa 100644
--- a/content/v1/csidriver/partners/tanzu.md
+++ b/content/v1/csidriver/partners/tanzu.md
@@ -3,7 +3,7 @@ title: "VMware Tanzu"
Description: "About VMware Tanzu basic"
---
-The CSI Driver for Dell Unity and PowerScale supports VMware Tanzu and deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and supervisor namespace.
+The CSI Drivers for Dell Unity XT, PowerScale, and PowerStore support VMware Tanzu. The deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and the supervisor namespace.
Currently, VMware Tanzu with normal configuration (without NAT) supports Kubernetes 1.20 and higher.
The CSI driver can be installed on this cluster using Helm. Installation of CSI drivers in Tanzu via Operator has not been qualified.
diff --git a/content/v1/csidriver/release/operator.md b/content/v1/csidriver/release/operator.md
index 4451adff9d..9696d83067 100644
--- a/content/v1/csidriver/release/operator.md
+++ b/content/v1/csidriver/release/operator.md
@@ -3,13 +3,14 @@ title: Operator
description: Release notes for Dell CSI Operator
---
-## Release Notes - Dell CSI Operator 1.7.0
+## Release Notes - Dell CSI Operator 1.8.0
->**Note:** There will be a delay in certification of Dell CSI Operator 1.7.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.7.0 release.
+>**Note:** There will be a delay in certification of Dell CSI Operator 1.8.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.8.0 release.
### New Features/Changes
-- Added support for Kubernetes 1.23.
+- Added support for Kubernetes 1.24.
+- Added support for OpenShift 4.10.
### Fixed Issues
There are no fixed issues in this release.
diff --git a/content/v1/csidriver/release/powerflex.md b/content/v1/csidriver/release/powerflex.md
index eabc638190..b77837c82e 100644
--- a/content/v1/csidriver/release/powerflex.md
+++ b/content/v1/csidriver/release/powerflex.md
@@ -3,21 +3,23 @@ title: PowerFlex
description: Release notes for PowerFlex CSI driver
---
-## Release Notes - CSI PowerFlex v2.2.0
+## Release Notes - CSI PowerFlex v2.4.0
### New Features/Changes
-- Added support for Kubernetes 1.23.
-- Added support for Amazon Elastic Kubernetes Service Anywhere.
+- Added InstallationID annotation for volume attributes.
+- Added optional parameter protectionDomain to storageclass.
+- Added support for RHEL 8.6.
### Fixed Issues
-There are no fixed issues in this release.
+- Enhancements to volume group snapshotter.
### Known Issues
| Issue | Workaround |
|-------|------------|
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100|
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
### Note:
diff --git a/content/v1/csidriver/release/powermax.md b/content/v1/csidriver/release/powermax.md
index 5739dd04ee..20163037c0 100644
--- a/content/v1/csidriver/release/powermax.md
+++ b/content/v1/csidriver/release/powermax.md
@@ -3,12 +3,19 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.2.0
+## Release Notes - CSI PowerMax v2.3.0
### New Features/Changes
-- Added support for new access modes in CSI Spec 1.5.
-- Added support for Volume Health Monitoring.
-- Added support for Kubernetes 1.23.
+- Replaced the deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype.
+- Added support for Standalone Helm Charts.
+- Removed beta volumesnapshotclass sample files.
+- Added mapping of PV/PVC to namespace.
+- Added support to configure fsGroupPolicy.
+- Added support to filter topology keys based on user inputs.
+- Added support for SRDF Metro group sharing multiple namespaces.
+- Added support for Kubernetes 1.24.
+- Added support for OpenShift 4.10.
+- Added support to convert replicated volume to non-replicated volume and vice versa for Sync and Async modes.
### Fixed Issues
There are no fixed issues in this release.
@@ -21,8 +28,9 @@ There are no fixed issues in this release.
| Getting initiators list fails with context deadline error | The following error can occur during the driver installation if a large number of initiators are present on the array. There is no workaround for this but it can be avoided by deleting stale initiators on the array|
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
| GetSnapVolumeList fails with context deadline error | The following error can occur if a large number of snapshots are present on the array. There is no workaround for this but it can be avoided by deleting unused snapshots on the array|
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node |
+| After expanding a file system volume, the new size is not reflected inside the container | This is a known issue and has been reported at https://github.com/dell/csm/issues/378. Workaround: Remount the volumes <br /> 1. Edit the replica count as 0 in the application StatefulSet <br /> 2. Change the replica count as 1 for the same StatefulSet. |
### Note:
-- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
-- Expansion of volumes and cloning of volumes are not supported for replicated volumes.
+- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in the OpenShift environment as OpenShift doesn't support enabling alpha features for Production Grade clusters.
diff --git a/content/v1/csidriver/release/powerscale.md b/content/v1/csidriver/release/powerscale.md
index ff2a38a5eb..1a14c62bb6 100644
--- a/content/v1/csidriver/release/powerscale.md
+++ b/content/v1/csidriver/release/powerscale.md
@@ -3,18 +3,19 @@ title: PowerScale
description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.2.0
+## Release Notes - CSI Driver for PowerScale v2.3.0
### New Features/Changes
-- Added support for Replication.
-- Added support for Kubernetes 1.23.
-- Added support to configure fsGroupPolicy.
-- Added support for session based authentication along with basic authentication for PowerScale.
+- Removed beta volumesnapshotclass sample files.
+- Added support for Kubernetes 1.24.
+- Added support to increase volume path limit.
+- Added support for OpenShift 4.10.
+- Added support for CSM Resiliency sidecar via Helm.
### Fixed Issues
-- CSI Driver installation fails with the error message "error getting FQDN".
+There are no fixed issues in this release.
### Known Issues
| Issue | Resolution or workaround, if known |
diff --git a/content/v1/csidriver/release/powerstore.md b/content/v1/csidriver/release/powerstore.md
index c624c9c509..f0bbb59e8a 100644
--- a/content/v1/csidriver/release/powerstore.md
+++ b/content/v1/csidriver/release/powerstore.md
@@ -3,14 +3,16 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.2.0
+## Release Notes - CSI PowerStore v2.3.0
### New Features/Changes
-- Added support for NVMe/TCP protocol.
-- Added support for Kubernetes 1.23.
-- Added support to configure fsGroupPolicy.
-- Added support for configuring permissions using POSIX mode bits and NFSv4 ACLs on NFS mount directory.
+- Support Volume Group Snapshots.
+- Removed beta volumesnapshotclass sample files.
+- Support Configurable Volume Attributes.
+- Added support for Kubernetes 1.24.
+- Added support for OpenShift 4.10.
+- Added support for NVMe/FC protocol.
### Fixed Issues
@@ -22,6 +24,8 @@ There are no fixed issues in this release.
|--------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior, set allowRoot: "true" in the storage class parameters |
+| If the NVMeFC pod is not getting created and the host loses the ssh connection, causing the driver pods to go to an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection |
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
### Note:
diff --git a/content/v1/csidriver/release/unity.md b/content/v1/csidriver/release/unity.md
index 87517e3703..701d0778d4 100644
--- a/content/v1/csidriver/release/unity.md
+++ b/content/v1/csidriver/release/unity.md
@@ -1,25 +1,27 @@
---
-title: Unity
-description: Release notes for Unity CSI driver
+title: Unity XT
+description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity v2.2.0
+## Release Notes - CSI Unity XT v2.3.0
### New Features/Changes
-- Added support for Kubernetes 1.23.
-- Added support for Standalone Helm Charts.
+- Removed beta volumesnapshotclass sample files.
+- Added support for Kubernetes 1.24.
+- Added support for OpenShift 4.10.
### Fixed Issues
-
+CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI.
### Known Issues
| Issue | Workaround |
|-------|------------|
| Topology-related node labels are not removed automatically. | Currently, when the driver is uninstalled, topology-related node labels are not getting removed automatically. There is an open issue in the Kubernetes to fix this. Until the fix is released, remove the labels manually after the driver un-installation using command **kubectl label node - - ...** Example: **kubectl label node csi-unity.dellemc.com/array123-iscsi-** Note: there must be - at the end of each label to remove it.|
-| NFS Clone - Resize of the snapshot is not supported by Unity Platform.| Currently, when the driver takes a clone of NFS volume, it succeeds. But when the user tries to resize the NFS volumesnapshot, the driver will throw an error. The user should never try to resize the cloned NFS volume.|
+| NFS Clone - Resize of the snapshot is not supported by Unity XT Platform, however the user should never try to resize the cloned NFS volume.| Currently, when the driver takes a clone of NFS volume, it succeeds but if the user tries to resize the NFS volumesnapshot, the driver will throw an error.|
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100|
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the VolumeAttachment to the node that went down. <br /> Now the volume can be attached to the new node. |
### Note:
diff --git a/content/v1/csidriver/troubleshooting/powerflex.md b/content/v1/csidriver/troubleshooting/powerflex.md
index 5699c2ec98..373605cc8e 100644
--- a/content/v1/csidriver/troubleshooting/powerflex.md
+++ b/content/v1/csidriver/troubleshooting/powerflex.md
@@ -20,6 +20,8 @@ description: Troubleshooting PowerFlex Driver
| The controller pod is stuck and producing errors such as" `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
+| CSI-PowerFlex volumes cannot mount; they are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: <br /> 1. Remove any multipath mapping involving a powerflex volume with `multipath -f ` <br /> 2. Blacklist CSI-PowerFlex volumes in the multipath config file |
>*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease
diff --git a/content/v1/csidriver/troubleshooting/powermax.md b/content/v1/csidriver/troubleshooting/powermax.md
index e1e7587300..76cc3d4b23 100644
--- a/content/v1/csidriver/troubleshooting/powermax.md
+++ b/content/v1/csidriver/troubleshooting/powermax.md
@@ -9,3 +9,5 @@ description: Troubleshooting PowerMax Driver
| `kubectl describe pod powermax-controller- -n ` indicates that the driver image could not be loaded | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or log in to the docker registry |
| `kubectl logs powermax-controller- -n driver` logs show that the driver cannot authenticate | Check your secret's username and password |
| `kubectl logs powermax-controller- -n driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty or it has the valid certificates|
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
diff --git a/content/v1/csidriver/troubleshooting/powerstore.md b/content/v1/csidriver/troubleshooting/powerstore.md
index 2de1b8de02..62c1622262 100644
--- a/content/v1/csidriver/troubleshooting/powerstore.md
+++ b/content/v1/csidriver/troubleshooting/powerstore.md
@@ -9,3 +9,6 @@ description: Troubleshooting PowerStore Driver
| The `kubectl logs -n csi-powerstore powerstore-node-` driver logs show that the driver can't connect to PowerStore API. | Check if you've created a secret with correct credentials |
|Installation of the driver on Kubernetes supported versions fails with the following error: ```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerstore/#optional-volume-snapshot-requirements)|
| If PVC is not getting created and getting the following error in PVC description: ```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with correct credentials |
+| If the NVMeFC pod is not getting created and the host loses the ssh connection, causing the driver pods to go to an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection |
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
+| If pod creation for NVMe takes a long time when there are more than 2 connections between the host and the array and a considerable number of volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. |
\ No newline at end of file
diff --git a/content/v1/csidriver/troubleshooting/unity.md b/content/v1/csidriver/troubleshooting/unity.md
index 447b218737..9905215390 100644
--- a/content/v1/csidriver/troubleshooting/unity.md
+++ b/content/v1/csidriver/troubleshooting/unity.md
@@ -1,16 +1,16 @@
---
-title: Unity
-description: Troubleshooting Unity Driver
+title: Unity XT
+description: Troubleshooting Unity XT Driver
---
---
| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
| When you run the command `kubectl describe pods unity-controller- –n unity`, the system indicates that the driver image could not be loaded. | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or login to the docker registry |
-| The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity - Authentication failure. | Check if you have created a secret with correct credentials |
+| The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity XT - Authentication failure. | Check if you have created a secret with correct credentials |
| `fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. | fsType of PVC must be set for fsGroup to work. fsType can be specified while creating a storage class. For NFS protocol, fsType can be specified as `nfs`. fsGroup doesn't work for ephemeral inline volumes. |
| Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically|
| If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. |
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
-
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down <br /> 2. Delete the VolumeAttachment to the node that went down. <br /> Now the volume can be attached to the new node. |
diff --git a/content/v1/csidriver/upgradation/drivers/isilon.md b/content/v1/csidriver/upgradation/drivers/isilon.md
index e473a299e4..75fca2acda 100644
--- a/content/v1/csidriver/upgradation/drivers/isilon.md
+++ b/content/v1/csidriver/upgradation/drivers/isilon.md
@@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
-## Upgrade Driver from version 2.1.0 to 2.2.0 using Helm
+## Upgrade Driver from version 2.2.0 to 2.3.0 using Helm
**Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Clone the repository using `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
+1. Clone the repository using `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git`, copy helm/csi-isilon/values.yaml to a new location with a custom name, say _my-isilon-settings.yaml_, to customize settings for installation, and edit _my-isilon-settings.yaml_ as per the requirements.
2. Change to the dell-csi-helm-installer directory to install the Dell PowerScale driver: `cd dell-csi-helm-installer`
3. Upgrade the CSI Driver for Dell PowerScale using following command:
diff --git a/content/v1/csidriver/upgradation/drivers/operator.md b/content/v1/csidriver/upgradation/drivers/operator.md
index d3f9b22a5b..eab8bedd28 100644
--- a/content/v1/csidriver/upgradation/drivers/operator.md
+++ b/content/v1/csidriver/upgradation/drivers/operator.md
@@ -13,7 +13,7 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the
### Using Installation Script
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
>Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
diff --git a/content/v1/csidriver/upgradation/drivers/powerflex.md b/content/v1/csidriver/upgradation/drivers/powerflex.md
index 0611b63233..5c181f183e 100644
--- a/content/v1/csidriver/upgradation/drivers/powerflex.md
+++ b/content/v1/csidriver/upgradation/drivers/powerflex.md
@@ -10,12 +10,11 @@ Description: Upgrade PowerFlex CSI driver
You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator.
-## Update Driver from v2.1 to v2.2 using Helm
+## Update Driver from v2.2 to v2.3 using Helm
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.2.0 driver.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.3.0 driver.
2. You need to create config.yaml with the configuration of your system.
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver)
- You must set the only system managed in v1.5/v2.0/v2.1 driver as default in config.json in v2.2 so that the driver knows the existing volumes belong to that system.
3. Update values file as needed.
4. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade`.
diff --git a/content/v1/csidriver/upgradation/drivers/powermax.md b/content/v1/csidriver/upgradation/drivers/powermax.md
index 1f2ba76421..98e1fd3059 100644
--- a/content/v1/csidriver/upgradation/drivers/powermax.md
+++ b/content/v1/csidriver/upgradation/drivers/powermax.md
@@ -10,10 +10,10 @@ Description: Upgrade PowerMax CSI driver
You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
-## Update Driver from v2.1 to v2.2 using Helm
+## Update Driver from v2.2 to v2.3 using Helm
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.2 driver.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.3 driver.
2. Update the values file as needed.
3. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`.
diff --git a/content/v1/csidriver/upgradation/drivers/powerstore.md b/content/v1/csidriver/upgradation/drivers/powerstore.md
index 7f5152bd3f..089fa38c68 100644
--- a/content/v1/csidriver/upgradation/drivers/powerstore.md
+++ b/content/v1/csidriver/upgradation/drivers/powerstore.md
@@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver
You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator.
-## Update Driver from v2.1 to v2.2 using Helm
+## Update Driver from v2.2 to v2.3 using Helm
Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
2. Edit `helm/config.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
diff --git a/content/v1/csidriver/upgradation/drivers/unity.md b/content/v1/csidriver/upgradation/drivers/unity.md
index 23ee1340e1..26b4e4d47d 100644
--- a/content/v1/csidriver/upgradation/drivers/unity.md
+++ b/content/v1/csidriver/upgradation/drivers/unity.md
@@ -1,13 +1,13 @@
---
-title: "Unity"
+title: "Unity XT"
tags:
- upgrade
- csi-driver
weight: 1
-Description: Upgrade Unity CSI driver
+Description: Upgrade Unity XT CSI driver
---
-You can upgrade the CSI Driver for Dell Unity using Helm or Dell CSI Operator.
+You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator.
**Note:**
1. User has to re-create existing custom-storage classes (if any) according to the latest format.
@@ -20,13 +20,12 @@ You can upgrade the CSI Driver for Dell Unity using Helm or Dell CSI Operator.
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.1 to csi-unity 2.2
+To upgrade the driver from csi-unity v2.2.0 to csi-unity v2.3.0:
-1. Get the latest csi-unity 2.2 code from Github using using `git clone -b v2.2.0 https://github.com/dell/csi-unity.git`.
-2. Create myvalues.yaml.
-3. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer with name say myvalues.yaml, to customize settings for installation edit myvalues.yaml as per the requirements.
-4. Navigate to common-helm-installer folder and execute the following command:
- `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade`
+1. Get the latest csi-unity v2.3.0 code from GitHub using `git clone -b v2.3.0 https://github.com/dell/csi-unity.git`.
+2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
+3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command:
+ `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade`
### Using Operator
diff --git a/content/v1/deployment/csmapi.md b/content/v1/deployment/csmapi.md
deleted file mode 100644
index 812f36b835..0000000000
--- a/content/v1/deployment/csmapi.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "CSM REST API"
-type: swagger
-weight: 1
-description: Reference for the CSM REST API
----
-
-{{< swaggerui src="../swagger.yaml" >}}
\ No newline at end of file
diff --git a/content/v1/deployment/csmcli.md b/content/v1/deployment/csmcli.md
deleted file mode 100644
index 25ef2d7e43..0000000000
--- a/content/v1/deployment/csmcli.md
+++ /dev/null
@@ -1,269 +0,0 @@
----
-title : CSM CLI
-linktitle: CSM CLI
-weight: 3
-description: >
- Dell EMC Container Storage Modules (CSM) Command Line Interface(CLI) Deployment and Management
----
-`csm` is a command-line client for installation of Dell EMC Container Storage Modules and CSI Drivers for Kubernetes clusters.
-
-## Pre-requisites
-
-1. [Deploy the Container Storage Modules Installer](../../deployment)
-2. Download/Install the `csm` binary from Github: https://github.com/dell/csm. Alternatively, you can build the binary by:
- - cloning the `csm` repository
- - changing into `csm/cmd/csm` directory
- - running `make build`
-3. create a `cli_env.sh` file that contains the correct values for the below variables. And export the variables by running `source ./cli_env.sh`
-
-```console
-# Change this to CSM API Server IP
-export API_SERVER_IP="127.0.0.1"
-
-# Change this to CSM API Server Port
-export API_SERVER_PORT="31313"
-
-# CSM API Server protocol - allowed values are https & http
-export SCHEME="https"
-
-# Path to store JWT
-export AUTH_CONFIG_PATH="/home/user/installer-token/"
-```
-
-## Usage
-
-```console
-~$ ./csm -h
-csm is command line tool for csm application
-
-Usage:
- csm [flags]
- csm [command]
-
-Available Commands:
- add add cluster, configuration or storage
- approve-task approve task for application
- authenticate authenticate user
- change change - subcommand is password
- create create application
- delete delete storage, cluster, configuration or application
- get get storage, cluster, application, configuration, supported driver, module, storage type
- help Help about any command
- reject-task reject task for an application
- update update storage, configuration or cluster
-
-Flags:
- -h, --help help for csm-cli
-
-Use "csm [command] --help" for more information about a command.
-```
-
-### Authenticate the User
-
-To begin with, you need to authenticate the user who will be managing the CSM Installer and its components.
-
-```console
-./csm authenticate --username= --password=
-```
-Or more securely, run the above command without `--password` to be prompted for one
-
-```console
-./csm authenticate --username=
-Enter user's password:
-
-```
-
-### Change Password
-
-To change password follow below command
-
-```console
-./csm change password --username=
-```
-
-### View Supported Platforms
-
-You can now view the supported Dell emcCSI Drivers
-
-```console
-./csm get supported-drivers
-```
-
-You can also view the supported Modules
-
-```console
-./csm get supported-modules
-```
-
-And also view the supported Storage Array Types
-
-```console
-./csm get supported-storage-arrays
-```
-
-### Add a Cluster
-
-You can now add a cluster by providing cluster detail name and Kubeconfig path
-
-```console
-./csm add cluster --clustername --configfilepath
-```
-
-### Upload Configuration Files
-
-You can now add a configuration file that can be used for creating application by providing filename and path
-
-```console
-./csm add configuration --filename --filepath
-```
-
-### Add a Storage System
-
-You can now add storage endpoints, array type and its unique id
-
-```console
-./csm add storage --endpoint --storage-type --unique-id --username
-```
-
-The optional `--meta-data` flag can be used to provide additional meta-data for the storage system that is used when creating Secrets for the CSI Driver. These fields include:
- - isDefault: Set to true if this storage system is used as default for multi-array configuration
- - skipCertificateValidation: Set to true to skip certificate validation
- - mdmId: Comma separated list of MDM IPs for PowerFlex
- - nasName: NAS Name for PowerStore
- - blockProtocol: Block Protocol for PowerStore
- - port: Port for PowerScale
- - portGroups: Comma separated list of port group names for PowerMax
-
-### Create an Application
-
-You may now create an application depending on the specific use case. Below are the common use cases:
-
-
- CSI Driver
-
-```console
-./csm create application --clustername \
- --driver-type powerflex: --name \
- --storage-arrays
-```
-
-
-
- CSI Driver with CSM Authorization
-
-CSM Authorization requires a `token.yaml` issued by storage Admin from the CSM Authorization Server, a certificate file, and the of the authorization server. The `token.yaml` and `cert` should be added by following the steps in [adding configuration file](#upload-configuration-files). CSM Authorization does not yet support all CSI Drivers/platforms(See [supported platforms documentation](../../authorization/#supported-platforms) or [supported platforms via CLI](#view-supported-platforms))).
-Finally, run the command below:
-
-```console
-./csm create application --clustername \
- --driver-type powerflex: --name \
- --storage-arrays \
- --module-type authorization: \
- --module-configuration "karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost="
-
-```
-
-
-
- CSM Observability(Standalone)
-
-CSM Observability depends on driver config secret(s) corresponding to the metric(s) you want to enable. Please see [CSM Observability](../../observability/metrics) for all Supported Metrics. For the sake of demonstration, assuming we want to enable [CSM Metrics for PowerFlex](../../observability/metrics/powerflex), the PowerFlex secret yaml should be added by following the steps in [adding configuration file](#upload-configuration-files).
-Once this is done, run the command below:
-
-```console
-./csm create application --clustername \
- --name \
- --module-type observability: \
- --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true"
-```
-
-
-
- CSM Observability(Standalone) with CSM Authorization
-
-See the individual steps for configuaration file pre-requisites for CSM Observability (Standalone) with CSM Authorization
-
-```console
-./csm create application --clustername \
- --name \
- --module-type "observability:,authorization:" \
- --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true,karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost="
-```
-
-
-
- CSI Driver for Dell EMC PowerMax with reverse proxy module
-
- To deploy the CSI Driver for Dell EMC PowerMax with the reverse proxy module, first upload the reverse proxy TLS certificate and TLS key via [adding a configuration file](#upload-configuration-files). Then, use the command below to create the application:
-
-```console
-./csm create application --clustername <cluster-name> \
- --driver-type powermax:<version> --name <application-name> \
- --storage-arrays <storage-array-id> \
- --module-type reverse-proxy:<version> \
- --module-configuration reverseProxy.tlsSecretKeyFile=<key-filename>,reverseProxy.tlsSecretCertFile=<certificate-filename>
-```
-
-
-
-#### CSI Driver with the replication module
-
- To deploy a CSI driver with the replication module, first add a target cluster through [adding a cluster](#add-a-cluster). Then, use the command below (this example deploys the CSI Driver for Dell EMC PowerStore with the replication module) to create the application:
-
-```console
-./csm create application --clustername <cluster-name> \
- --driver-type powerstore:<version> --name <application-name> \
- --storage-arrays <storage-array-id> \
- --module-configuration target_cluster=<target-cluster-name> \
- --module-type replication:<version>
-```
-
-
-
-
-#### CSI Driver with other module(s) not covered above
-
- Assume you want to deploy a driver with `module A` and `module B`, which have the specific configurations `A.image="docker:v1"`, `A.filename=hello`, and `B.namespace=world`:
-
-```console
-./csm create application --clustername <cluster-name> \
- --driver-type powerflex:<version> --name <application-name> \
- --storage-arrays <storage-array-id> \
- --module-type "module A:<version>,module B:<version>" \
- --module-configuration "A.image=docker:v1,A.filename=hello,B.namespace=world"
-```
-
-
-
-> __Note__:
- - `--driver-type` and `--module-type` flags in create application command MUST match the values from the [supported CSM platforms](#view-supported-platforms)
- - The replication module supports only a pair of clusters at a time (a source and a target, or a single cluster) from the CSM Installer; however, `repctl` can be used if needed to add multiple pairs of target clusters. Using the replication module with other modules during application creation is not yet supported.
-
-### Approve application/task
-
-You may now approve the task so that you can continue to work with the application:
-
-```console
-./csm approve-task --applicationname <application-name>
-```
-
-### Reject application/task
-
-You may want to reject a task or application to discontinue the ongoing process:
-
-```console
-./csm reject-task --applicationname <application-name>
-```
-
-### Delete application/task
-
-If you want to delete an application, run:
-
-```console
-./csm delete application --name <application-name>
-```
-
-> __Note__: When deleting an application, the namespace and Secrets are not deleted. These resources need to be deleted manually, for example as sketched below. See more in [Troubleshooting](../troubleshooting#after-deleting-an-application-why-cant-i-re-create-the-same-application).
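-
-A minimal cleanup sketch, assuming the application was deployed to a hypothetical namespace `vxflexos` with a driver Secret `vxflexos-config` (your namespace and Secret names will differ):
-
-```console
-kubectl delete secret vxflexos-config -n vxflexos
-kubectl delete namespace vxflexos
-```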
-
-> __Note__: All commands and associated syntax can be displayed with `-h` or `--help`
-
diff --git a/content/v1/deployment/csminstaller/_index.md b/content/v1/deployment/csminstaller/_index.md
index 95ae36a236..4527ddfd9f 100644
--- a/content/v1/deployment/csminstaller/_index.md
+++ b/content/v1/deployment/csminstaller/_index.md
@@ -22,7 +22,7 @@ The CSM (Container Storage Modules) Installer simplifies the deployment and mana
| Replication | 1.0 |
| Resiliency | 1.0 |
| CSI Driver for PowerScale | v2.0 |
-| CSI Driver for Unity | v2.0 |
+| CSI Driver for Unity XT | v2.0 |
| CSI Driver for PowerStore | v2.0 |
| CSI Driver for PowerFlex | v2.0 |
| CSI Driver for PowerMax | v2.0 |
diff --git a/content/v1/deployment/csmoperator/_index.md b/content/v1/deployment/csmoperator/_index.md
index 702fab7871..c89d7e9d74 100644
--- a/content/v1/deployment/csmoperator/_index.md
+++ b/content/v1/deployment/csmoperator/_index.md
@@ -16,19 +16,19 @@ Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenS
| Kubernetes Version | OpenShift Version |
| -------------------- | ------------------- |
-| 1.21, 1.22, 1.23 | 4.8, 4.9 |
+| 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
## Supported CSI Drivers
| CSI Driver | Version | ConfigVersion |
| ------------------ | --------- | -------------- |
-| CSI PowerScale | 2.2.0 | v2.2.0 |
+| CSI PowerScale | 2.2.0 + | v2.2.0 + |
## Supported CSM Modules
| CSM Modules | Version | ConfigVersion |
| ------------------ | --------- | -------------- |
-| CSM Authorization | 1.2.0 | v1.2.0 |
+| CSM Authorization | 1.2.0 + | v1.2.0 + |
## Installation
Dell CSM Operator can be installed manually or via Operator Hub.
@@ -82,6 +82,30 @@ To uninstall a CSM operator installed with OLM run `bash scripts/uninstall_olm.s
{{< imgproc uninstall_olm.jpg Resize "2500x" >}}{{< /imgproc >}}
+### Upgrade Dell CSM Operator
+Dell CSM Operator can be upgraded in two ways:
+
+1. Using the installation script (for non-OLM based installations)
+
+2. Using Operator Lifecycle Manager (OLM)
+
+#### Using Installation Script
+1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator).
+2. `cd csm-operator`
+3. `git checkout -b 'csm-operator-version'`
+4. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator; the full sequence is sketched below.
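+
+A condensed sketch of the script-based upgrade, using the placeholder branch name from step 3:
+
+```bash
+git clone https://github.com/dell/csm-operator.git
+cd csm-operator
+git checkout -b 'csm-operator-version'
+bash scripts/install.sh --upgrade
+```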
+
+>Note: Dell CSM Operator installs to the 'dell-csm-operator' namespace by default.
+
+#### Using OLM
+The upgrade of the Dell CSM Operator is done via Operator Lifecycle Manager.
+
+The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role while upgrading dell-csm-operator on OpenShift. This option can be set during installation of dell-csm-operator on OpenShift via the console and can be set to either `Manual` or `Automatic`.
+- If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csm-operator is available in the **`Operator hub`**, and upgrades it to the latest available version.
+- If the upgrade policy is set to `Manual`, OpenShift notifies the user of an available upgrade. This notification can be viewed in the **`Installed Operators`** section of the OpenShift console. Clicking the hyperlink to `Approve` the installation triggers the dell-csm-operator upgrade process; a command-line alternative is sketched below.
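+
+Pending install plans can also be inspected and approved from the command line; a hedged sketch (the namespace and plan name are placeholders):
+
+```bash
+oc get installplans -n <operator-namespace>
+oc patch installplan <install-plan-name> -n <operator-namespace> --type merge -p '{"spec":{"approved":true}}'
+```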
+
+**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`**.
+
### Custom Resource Definitions
As part of the Dell CSM Operator installation, a CRD representing configuration for the CSI Driver and CSM Modules is also installed.
`containerstoragemodule` CRD is installed in API Group `storage.dell.com`.
@@ -124,86 +148,3 @@ The specification for the Custom Resource is the same for all the drivers.Below
**nodeSelector** - Used to specify node selectors for the driver StatefulSet/Deployment and DaemonSet.
>**Note:** The `image` field should point to the correct image tag for version of the driver you are installing.
-
-### Pre-requisites for installation of the CSI Drivers
-
-On Upstream Kubernetes clusters, make sure to install
-* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs
-* External Volume Snapshot Controller
-
-#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
-
-#### Volume Snapshot Controller
-The CSI external-snapshotter sidecar is split into two controllers:
-- A common snapshot controller
-- A CSI external-snapshotter sidecar
-
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
-
-*NOTE:*
-- The manifests available on GitHub install the snapshotter image:
- - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags)
-- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
-
-#### Installation example
-
-You can install CRDs and the default snapshot controller by running the following commands:
-```bash
-git clone https://github.com/kubernetes-csi/external-snapshotter/
-cd ./external-snapshotter
-git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
-```
-*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
-
-## Installing CSI Driver via Operator
-
-Refer [PowerScale Driver](drivers/powerscale) to install the driver via Operator
-
->**Note**: If you are using an OLM based installation, example manifests are available in `OperatorHub` UI.
-You can edit these manifests and install the driver using the `OperatorHub` UI.
-
-### Verifying the driver installation
-Once the driver `Custom Resource (CR)` is created, you can verify the installation as mentioned below
-
-* Check if ContainerStorageModule CR is created successfully using the command below:
- ```
- $ kubectl get csm/ -n -o yaml
- ```
-* Check the status of the CR to verify if the driver installation is in the `Succeeded` state. If the status is not `Succeeded`, see the [Troubleshooting guide](./troubleshooting/#my-dell-csi-driver-install-failed-how-do-i-fix-it) for more information.
-
-
-### Update CSI Drivers
-The CSI Drivers and CSM Modules installed by the Dell CSM Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include:
-
-* Modifying the installation directly via `kubectl edit`
- For e.g. - If the name of the installed PowerScale driver is powerscale, then run
- ```
- # Replace driver-namespace with the namespace where the PowerScale driver is installed
- $ kubectl edit csm/powerscale -n
- ```
- and modify the installation
-* Modify the API object in-place via `kubectl patch`
-
-#### Supported modifications
-* Changing environment variable values for driver
-* Updating the image of the driver
-
-### Uninstall CSI Driver
-The CSI Drivers and CSM Modules can be uninstalled by deleting the Custom Resource.
-
-For e.g.
-```
-$ kubectl delete csm/powerscale -n
-```
-
-By default, the `forceRemoveDriver` option is set to `true` which will uninstall the CSI Driver and CSM Modules when the Custom Resource is deleted. Setting this option to `false` is not recommended.
-
-### SideCars
-Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support.
-
-## Modules
-The CSM Operator can optionally enable modules that are supported by the specific Dell CSI driver. By default, the modules are disabled but they can be enabled by setting the `enabled` flag to true and setting any other configuration options for the given module.
diff --git a/content/v1/deployment/csmoperator/drivers/_index.md b/content/v1/deployment/csmoperator/drivers/_index.md
index c850691c0d..18129d5071 100644
--- a/content/v1/deployment/csmoperator/drivers/_index.md
+++ b/content/v1/deployment/csmoperator/drivers/_index.md
@@ -4,3 +4,92 @@ linkTitle: "CSI Drivers"
description: Installation of Dell CSI Drivers using Dell CSM Operator
weight: 1
---
+
+## Pre-requisites for installation of the CSI Drivers
+
+On Upstream Kubernetes clusters, make sure to install:
+* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs
+* External Volume Snapshot Controller
+
+### Volume Snapshot CRDs
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+
+### Volume Snapshot Controller
+The CSI external-snapshotter sidecar is split into two controllers:
+- A common snapshot controller
+- A CSI external-snapshotter sidecar
+
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+
+*NOTE:*
+- The manifests available on GitHub install the snapshotter image:
+ - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags)
+- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
+
+### Installation example
+
+You can install CRDs and the default snapshot controller by running the following commands:
+```bash
+git clone https://github.com/kubernetes-csi/external-snapshotter/
+cd ./external-snapshotter
+git checkout release-<version>
+kubectl create -f client/config/crd
+kubectl create -f deploy/kubernetes/snapshot-controller
+```
+*NOTE:*
+- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+
+## Installing CSI Driver via Operator
+
+Refer to the [PowerScale Driver](../drivers/powerscale) page to install the driver via Operator
+
+>**Note**: If you are using an OLM based installation, example manifests are available in `OperatorHub` UI.
+You can edit these manifests and install the driver using the `OperatorHub` UI.
+
+### Verifying the driver installation
+Once the driver `Custom Resource (CR)` is created, you can verify the installation as mentioned below
+
+* Check if ContainerStorageModule CR is created successfully using the command below:
+ ```
+ $ kubectl get csm/<name-of-custom-resource> -n <namespace> -o yaml
+ ```
+* Check the status of the CR to verify if the driver installation is in the `Succeeded` state. If the status is not `Succeeded`, see the [Troubleshooting guide](../troubleshooting/#my-dell-csi-driver-install-failed-how-do-i-fix-it) for more information.
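+
+A quick way to inspect just the status block, assuming a driver instance named `powerscale` (the name and namespace are illustrative):
+
+```
+$ kubectl get csm/powerscale -n <namespace> -o yaml | grep -A5 'status:'
+```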
+
+
+### Update CSI Drivers
+The CSI Drivers and CSM Modules installed by the Dell CSM Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include:
+
+* Modifying the installation directly via `kubectl edit`
+ For example, if the name of the installed PowerScale driver is powerscale, then run
+ ```
+ # Replace driver-namespace with the namespace where the PowerScale driver is installed
+ $ kubectl edit csm/powerscale -n <driver-namespace>
+ ```
+ and modify the installation
+* Modify the API object in-place via `kubectl patch`, as sketched below
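+
+A hedged sketch of such a patch (the field path under `spec` is illustrative; check the sample manifest for your driver before patching):
+
+```
+$ kubectl patch csm/powerscale -n <driver-namespace> --type merge \
+  -p '{"spec":{"driver":{"common":{"image":"dellemc/csi-isilon:v2.3.0"}}}}'
+```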
+
+#### Supported modifications
+* Changing environment variable values for driver
+* Updating the image of the driver
+* Upgrading the driver version
+
+**NOTES:**
+1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
+ ```yaml
+ driver:
+ configVersion: v2.3.0
+ ```
+2. Do not try to update the operator by modifying the original `CustomResource` manifest file and running the `kubectl apply -f` command. As part of the driver installation, the Operator sets some annotations on the `CustomResource` object which are further utilized in some workflows (like detecting upgrade of drivers). If you run the `kubectl apply -f` command to update the driver, these annotations are overwritten and this may lead to failures.
+
+### Uninstall CSI Driver
+The CSI Drivers and CSM Modules can be uninstalled by deleting the Custom Resource.
+
+For example:
+```
+$ kubectl delete csm/powerscale -n <driver-namespace>
+```
+
+By default, the `forceRemoveDriver` option is set to `true` which will uninstall the CSI Driver and CSM Modules when the Custom Resource is deleted. Setting this option to `false` is not recommended.
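+
+A sketch of where this option lives, assuming it sits directly under the driver spec as in the sample manifests:
+
+```yaml
+spec:
+  driver:
+    # When true (the default), deleting the CR also removes the driver and modules
+    forceRemoveDriver: true
+```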
+
+### SideCars
+Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support.
diff --git a/content/v1/deployment/csmoperator/drivers/powerscale.md b/content/v1/deployment/csmoperator/drivers/powerscale.md
index 4471f1d1e6..261e0c1222 100644
--- a/content/v1/deployment/csmoperator/drivers/powerscale.md
+++ b/content/v1/deployment/csmoperator/drivers/powerscale.md
@@ -137,7 +137,7 @@ User can query for all Dell CSI drivers using the following command:
```kubectl create -f ``` .
This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file.
-5. [Verify the CSI Driver installation](../../#verifying-the-driver-installation)
+7. [Verify the CSI Driver installation](../drivers/_index.md#verifying-the-driver-installation)
**Note** :
1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
diff --git a/content/v1/deployment/csmoperator/modules/_index.md b/content/v1/deployment/csmoperator/modules/_index.md
index 4a76e7d868..1ac79f9d15 100644
--- a/content/v1/deployment/csmoperator/modules/_index.md
+++ b/content/v1/deployment/csmoperator/modules/_index.md
@@ -10,4 +10,4 @@ The steps include:
1. Deploy the Dell CSM Operator (if it is not already deployed). Please follow the instructions available [here](../../#installation).
2. Configure any pre-requisite for the desired module(s). See the specific module below for more information
-3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver)) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
\ No newline at end of file
+3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
diff --git a/content/v1/deployment/swagger.yaml b/content/v1/deployment/swagger.yaml
deleted file mode 100644
index 15a9b8b227..0000000000
--- a/content/v1/deployment/swagger.yaml
+++ /dev/null
@@ -1,1395 +0,0 @@
-basePath: /api/v1
-definitions:
- ApplicationCreateRequest:
- properties:
- cluster_id:
- type: string
- driver_configuration:
- items:
- type: string
- type: array
- driver_type_id:
- type: string
- module_configuration:
- items:
- type: string
- type: array
- module_types:
- items:
- type: string
- type: array
- name:
- type: string
- storage_arrays:
- items:
- type: string
- type: array
- required:
- - cluster_id
- - driver_type_id
- - name
- type: object
- ApplicationResponse:
- properties:
- application_output:
- type: string
- cluster_id:
- type: string
- driver_configuration:
- items:
- type: string
- type: array
- driver_type_id:
- type: string
- id:
- type: string
- module_configuration:
- items:
- type: string
- type: array
- module_types:
- items:
- type: string
- type: array
- name:
- type: string
- storage_arrays:
- items:
- type: string
- type: array
- type: object
- ClusterResponse:
- properties:
- cluster_id:
- type: string
- cluster_name:
- type: string
- nodes:
- description: The nodes
- type: string
- type: object
- ConfigFileResponse:
- properties:
- id:
- type: string
- name:
- type: string
- type: object
- DriverResponse:
- properties:
- id:
- type: string
- storage_array_type_id:
- type: string
- version:
- type: string
- type: object
- ErrorMessage:
- properties:
- arguments:
- items:
- type: string
- type: array
- code:
- description: HTTPStatusEnum Possible HTTP status values of completed or failed
- jobs
- enum:
- - 200
- - 201
- - 202
- - 204
- - 400
- - 401
- - 403
- - 404
- - 422
- - 429
- - 500
- - 503
- type: integer
- message:
- description: Message string.
- type: string
- message_l10n:
- description: Localized message
- type: object
- severity:
- description: |-
- SeverityEnum - The severity of the condition
- * INFO - Information that may be of use in understanding the failure. It is not a problem to fix.
- * WARNING - A condition that isn't a failure, but may be unexpected or a contributing factor. It may be necessary to fix the condition to successfully retry the request.
- * ERROR - An actual failure condition through which the request could not continue.
- * CRITICAL - A failure with significant impact to the system. Normally failed commands roll back and are just ERROR, but this is possible
- enum:
- - INFO
- - WARNING
- - ERROR
- - CRITICAL
- type: string
- type: object
- ErrorResponse:
- properties:
- http_status_code:
- description: HTTPStatusEnum Possible HTTP status values of completed or failed
- jobs
- enum:
- - 200
- - 201
- - 202
- - 204
- - 400
- - 401
- - 403
- - 404
- - 422
- - 429
- - 500
- - 503
- type: integer
- messages:
- description: |-
- A list of messages describing the failure encountered by this request. At least one will
- be of Error severity because Info and Warning conditions do not cause the request to fail
- items:
- $ref: '#/definitions/ErrorMessage'
- type: array
- type: object
- ModuleResponse:
- properties:
- id:
- type: string
- name:
- type: string
- standalone:
- type: boolean
- version:
- type: string
- type: object
- StorageArrayCreateRequest:
- properties:
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- password:
- type: string
- storage_array_type:
- type: string
- unique_id:
- type: string
- username:
- type: string
- required:
- - management_endpoint
- - password
- - storage_array_type
- - unique_id
- - username
- type: object
- StorageArrayResponse:
- properties:
- id:
- type: string
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- storage_array_type_id:
- type: string
- unique_id:
- type: string
- username:
- type: string
- type: object
- StorageArrayTypeResponse:
- properties:
- id:
- type: string
- name:
- type: string
- type: object
- StorageArrayUpdateRequest:
- properties:
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- password:
- type: string
- storage_array_type:
- type: string
- unique_id:
- type: string
- username:
- type: string
- type: object
- TaskResponse:
- properties:
- _links:
- additionalProperties:
- additionalProperties:
- type: string
- type: object
- type: object
- application_name:
- type: string
- id:
- type: string
- logs:
- type: string
- status:
- type: string
- type: object
-info:
- contact: {}
- description: CSM Deployment API
- title: CSM Deployment API
- version: "1.0"
-paths:
- /applications:
- get:
- consumes:
- - application/json
- description: List all applications
- operationId: list-applications
- parameters:
- - description: Application Name
- in: query
- name: name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ApplicationResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all applications
- tags:
- - application
- post:
- consumes:
- - application/json
- description: Create a new application
- operationId: create-application
- parameters:
- - description: Application info for creation
- in: body
- name: application
- required: true
- schema:
- $ref: '#/definitions/ApplicationCreateRequest'
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new application
- tags:
- - application
- /applications/{id}:
- delete:
- consumes:
- - application/json
- description: Delete an application
- operationId: delete-application
- parameters:
- - description: Application ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete an application
- tags:
- - application
- get:
- consumes:
- - application/json
- description: Get an application
- operationId: get-application
- parameters:
- - description: Application ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ApplicationResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get an application
- tags:
- - application
- /clusters:
- get:
- consumes:
- - application/json
- description: List all clusters
- operationId: list-clusters
- parameters:
- - description: Cluster Name
- in: query
- name: cluster_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ClusterResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all clusters
- tags:
- - cluster
- post:
- consumes:
- - application/json
- description: Create a new cluster
- operationId: create-cluster
- parameters:
- - description: Name of the cluster
- in: formData
- name: name
- required: true
- type: string
- - description: kube config file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new cluster
- tags:
- - cluster
- /clusters/{id}:
- delete:
- consumes:
- - application/json
- description: Delete a cluster
- operationId: delete-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete a cluster
- tags:
- - cluster
- get:
- consumes:
- - application/json
- description: Get a cluster
- operationId: get-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a cluster
- tags:
- - cluster
- patch:
- consumes:
- - application/json
- description: Update a cluster
- operationId: update-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- - description: Name of the cluster
- in: formData
- name: name
- type: string
- - description: kube config file
- in: formData
- name: file
- type: file
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a cluster
- tags:
- - cluster
- /configuration-files:
- get:
- consumes:
- - application/json
- description: List all configuration files
- operationId: list-config-file
- parameters:
- - description: Name of the configuration file
- in: query
- name: config_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ConfigFileResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all configuration files
- tags:
- - configuration-file
- post:
- consumes:
- - application/json
- description: Create a new configuration file
- operationId: create-config-file
- parameters:
- - description: Name of the configuration file
- in: formData
- name: name
- required: true
- type: string
- - description: Configuration file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/ConfigFileResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new configuration file
- tags:
- - configuration-file
- /configuration-files/{id}:
- delete:
- consumes:
- - application/json
- description: Delete a configuration file
- operationId: delete-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete a configuration file
- tags:
- - configuration-file
- get:
- consumes:
- - application/json
- description: Get a configuration file
- operationId: get-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ConfigFileResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a configuration file
- tags:
- - configuration-file
- patch:
- consumes:
- - application/json
- description: Update a configuration file
- operationId: update-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- - description: Name of the configuration file
- in: formData
- name: name
- required: true
- type: string
- - description: Configuration file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a configuration file
- tags:
- - configuration-file
- /driver-types:
- get:
- consumes:
- - application/json
- description: List all driver types
- operationId: list-driver-types
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/DriverResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all driver types
- tags:
- - driver-type
- /driver-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a driver type
- operationId: get-driver-type
- parameters:
- - description: Driver Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/DriverResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a driver type
- tags:
- - driver-type
- /module-types:
- get:
- consumes:
- - application/json
- description: List all module types
- operationId: list-module-type
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ModuleResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all module types
- tags:
- - module-type
- /module-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a module type
- operationId: get-module-type
- parameters:
- - description: Module Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ModuleResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a module type
- tags:
- - module-type
- /storage-array-types:
- get:
- consumes:
- - application/json
- description: List all storage array types
- operationId: list-storage-array-type
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/StorageArrayTypeResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all storage array types
- tags:
- - storage-array-type
- /storage-array-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a storage array type
- operationId: get-storage-array-type
- parameters:
- - description: Storage Array Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/StorageArrayTypeResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a storage array type
- tags:
- - storage-array-type
- /storage-arrays:
- get:
- consumes:
- - application/json
- description: List all storage arrays
- operationId: list-storage-arrays
- parameters:
- - description: Unique ID
- in: query
- name: unique_id
- type: string
- - description: Storage Type
- in: query
- name: storage_type
- type: string
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- items:
- $ref: '#/definitions/StorageArrayResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all storage arrays
- tags:
- - storage-array
- post:
- consumes:
- - application/json
- description: Create a new storage array
- operationId: create-storage-array
- parameters:
- - description: Storage Array info for creation
- in: body
- name: storageArray
- required: true
- schema:
- $ref: '#/definitions/StorageArrayCreateRequest'
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/StorageArrayResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new storage array
- tags:
- - storage-array
- /storage-arrays/{id}:
- delete:
- consumes:
- - application/json
- description: Delete storage array
- operationId: delete-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: Success
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete storage array
- tags:
- - storage-array
- get:
- consumes:
- - application/json
- description: Get storage array
- operationId: get-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/StorageArrayResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get storage array
- tags:
- - storage-array
- patch:
- consumes:
- - application/json
- description: Update a storage array
- operationId: update-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- - description: Storage Array info for update
- in: body
- name: storageArray
- required: true
- schema:
- $ref: '#/definitions/StorageArrayUpdateRequest'
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a storage array
- tags:
- - storage-array
- /tasks:
- get:
- consumes:
- - application/json
- description: List all tasks
- operationId: list-tasks
- parameters:
- - description: Application Name
- in: query
- name: application_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/TaskResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all tasks
- tags:
- - task
- /tasks/{id}:
- get:
- consumes:
- - application/json
- description: Get a task
- operationId: get-task
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/TaskResponse'
- "303":
- description: See Other
- schema:
- $ref: '#/definitions/TaskResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a task
- tags:
- - task
- /tasks/{id}/approve:
- post:
- consumes:
- - application/json
- description: Approve state change for an application
- operationId: approve-state-change-application
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- - description: Task is associated with an Application update operation
- in: query
- name: updating
- type: boolean
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Approve state change for an application
- tags:
- - task
- /tasks/{id}/cancel:
- post:
- consumes:
- - application/json
- description: Cancel state change for an application
- operationId: cancel-state-change-application
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- - description: Task is associated with an Application update operation
- in: query
- name: updating
- type: boolean
- produces:
- - application/json
- responses:
- "200":
- description: Success
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Cancel state change for an application
- tags:
- - task
- /users/change-password:
- patch:
- consumes:
- - application/json
- description: Change password for existing user
- operationId: change-password
- parameters:
- - description: Enter New Password
- format: password
- in: query
- name: password
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "401":
- description: Unauthorized
- schema:
- $ref: '#/definitions/ErrorResponse'
- "403":
- description: Forbidden
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - BasicAuth: []
- summary: Change password for existing user
- tags:
- - user
- /users/login:
- post:
- consumes:
- - application/json
- description: Login for existing user
- operationId: login
- produces:
- - application/json
- responses:
- "200":
- description: Bearer Token for Logged in User
- schema:
- type: string
- "401":
- description: Unauthorized
- schema:
- $ref: '#/definitions/ErrorResponse'
- "403":
- description: Forbidden
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - BasicAuth: []
- summary: Login for existing user
- tags:
- - user
-securityDefinitions:
- ApiKeyAuth:
- in: header
- name: Authorization
- type: apiKey
- BasicAuth:
- type: basic
-swagger: "2.0"
diff --git a/content/v1/deployment/troubleshooting.md b/content/v1/deployment/troubleshooting.md
deleted file mode 100644
index 60149d0e44..0000000000
--- a/content/v1/deployment/troubleshooting.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "Troubleshooting"
-linkTitle: "Troubleshooting"
-weight: 4
-Description: >
- Troubleshooting guide
----
-
-## Frequently Asked Questions
-1. [Why does the installation fail due to an invalid cipherKey value?](#why-does-the-installation-fail-due-to-an-invalid-cipherkey-value)
-2. [Why does the cluster-init pod show the error "cluster has already been initialized"?](#why-does-the-cluster-init-pod-show-the-error-cluster-has-already-been-initialized)
-3. [Why does the precheck fail when creating an application?](#why-does-the-precheck-fail-when-creating-an-application)
-4. [How can I view detailed logs for the CSM Installer?](#how-can-i-view-detailed-logs-for-the-csm-installer)
-5. [After deleting an application, why can't I re-create the same application?](#after-deleting-an-application-why-cant-i-re-create-the-same-application)
-
-### Why does the installation fail due to an invalid cipherKey value?
-The `cipherKey` value used during deployment of the CSM Installer must be exactly 32 characters in length and contained within quotes.
-
-### Why does the cluster-init pod show the error "cluster has already been initialized"?
-During the initial start-up of the CSM Installer, the database will be initialized by the cluster-init job. If the CSM Installer is uninstalled and then re-installed on the same cluster, this error may be shown due to the Persistent Volume for the database already containing an initialized database. The CSM Installer will function as normal and the cluster-init job can be ignored.
-
-If a clean installation of the CSM Installer is required, the `dbVolumeDirectory` (default location `/var/lib/cockroachdb`) must be deleted from the worker node which is hosting the Persistent Volume. After this directory is deleted, the CSM Installer can be re-installed.
-
-Caution: Deleting the `dbVolumeDirectory` location will remove any data persisted by the CSM Installer including clusters, storage systems, and installed applications.
-
-### Why does the precheck fail when creating an application?
-Each CSI Driver and CSM Module has required software or CRDs that must be installed before the application can be deployed in the cluster. These prechecks are verified when the `csm create application` command is executed. If the error message "create application failed" is displayed, [review the CSM Installer logs](#how-can-i-view-detailed-logs-for-the-csm-installer) to view details about the failed prechecks.
-
-If the precheck fails due to required software (e.g. iSCSI, NFS, SDC) not installed on the cluster nodes, follow these steps to address the issue:
-1. Delete the cluster from the CSM Installer using the `csm delete cluster` command.
-2. Update the nodes in the cluster by installing required software.
-3. Add the cluster to the CSM Installer using the `csm add cluster` command.
-
-### How can I view detailed logs for the CSM Installer?
-Detailed logs of the CSM Installer can be displayed using the following command:
-```
-kubectl logs -f -n <namespace> deploy/dell-csm-installer
-```
-
-### After deleting an application, why can't I re-create the same application?
-After deleting an application using the `csm delete application` command, the namespace and other non-application resources including Secrets are not deleted from the cluster. This is to prevent removing any resources that may not have been created by the CSM Installer. The namespace must be manually deleted before attempting to re-create the same application using the CSM Installer.
diff --git a/content/v1/observability/_index.md b/content/v1/observability/_index.md
index 6b3ff27be8..8f9f05fc63 100644
--- a/content/v1/observability/_index.md
+++ b/content/v1/observability/_index.md
@@ -29,7 +29,7 @@ CSM for Observability is composed of several services, each living in its own Gi
CSM for Observability provides the following capabilities:
{{}}
-| Capability | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| Capability | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
| - | :-: | :-: | :-: | :-: | :-: |
| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | no | yes |
| Collect and expose File System Metrics via the OpenTelemetry Collector | no | no | no | no | yes |
@@ -46,8 +46,8 @@ CSM for Observability provides the following capabilities:
{{}}
| COP/OS | Supported Versions |
|-|-|
-| Kubernetes | 1.21, 1.22, 1.23 |
-| Red Hat OpenShift | 4.8, 4.9 |
+| Kubernetes | 1.22, 1.23, 1.24 |
+| Red Hat OpenShift | 4.9, 4.10 |
| Rancher Kubernetes Engine | yes |
| RHEL | 7.x, 8.x |
| CentOS | 7.8, 7.9 |
@@ -67,8 +67,8 @@ CSM for Observability supports the following CSI drivers and versions.
{{}}
| Storage Array | CSI Driver | Supported Versions |
| ------------- | ---------- | ------------------ |
-| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + |
+| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + |
{{
}}
## Topology Data
@@ -79,7 +79,7 @@ CSM for Observability provides Kubernetes administrators with the topology data
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Namespace | The namespace associated with the persistent volume claim |
| Persistent Volume | The name of the persistent volume |
-| Status | The status of the persistent volume. "Released" indicating the persistent volume has a claim. "Bound" indicating the persistent volume has a claim |
+| Status | The status of the persistent volume. "Released" indicates the persistent volume does not have a claim. "Bound" indicates the persistent volume has a claim |
| Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume |
| CSI Driver | The name of the CSI driver that was responsible for provisioning the volume on the storage system |
| Created | The date the persistent volume was created |
diff --git a/content/v1/observability/deployment/_index.md b/content/v1/observability/deployment/_index.md
index 9a5d6f2566..50efaa2c3f 100644
--- a/content/v1/observability/deployment/_index.md
+++ b/content/v1/observability/deployment/_index.md
@@ -30,7 +30,7 @@ The Prometheus service should be running on the same Kubernetes cluster as the C
| Supported Version | Image | Helm Chart |
| ----------------- | ----------------------- | ------------------------------------------------------------ |
-| 2.23.0 | prom/prometheus:v2.23.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) |
+| 2.34.0 | prom/prometheus:v2.34.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) |
**Note**: It is the user's responsibility to provide persistent storage for Prometheus if they want to preserve historical data.
@@ -57,7 +57,7 @@ Here is a sample minimal configuration for Prometheus. Please note that the conf
enabled: true
image:
repository: quay.io/prometheus/prometheus
- tag: v2.23.0
+ tag: v2.34.0
pullPolicy: IfNotPresent
persistentVolume:
enabled: false
@@ -119,7 +119,7 @@ The Grafana dashboards require Grafana to be deployed in the same Kubernetes clu
| Supported Version | Helm Chart |
| ----------------- | --------------------------------------------------------- |
-| 7.3.0-7.3.2 | [Grafana Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/grafana) |
+| 8.5.0 | [Grafana Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/grafana) |
Grafana must be configured with the following data sources/plugins:
@@ -191,7 +191,7 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste
# grafana-values.yaml
image:
repository: grafana/grafana
- tag: 7.3.0
+ tag: 8.5.0
sha: ""
pullPolicy: IfNotPresent
service:
@@ -242,11 +242,11 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste
 ## Additional grafana server ConfigMap mounts
 ## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace.
extraConfigmapMounts: [] # If you created a ConfigMap on the previous step, delete [] and uncomment the lines below
- # - name: certs-configmap
- # mountPath: /etc/ssl/certs/ca-certificates.crt
- # subPath: ca-certificates.crt
- # configMap: certs-configmap
- # readOnly: true
+ # - name: certs-configmap
+ # mountPath: /etc/ssl/certs/ca-certificates.crt
+ # subPath: ca-certificates.crt
+ # configMap: certs-configmap
+ # readOnly: true
```
3. Add the Grafana Helm chart repository.
diff --git a/content/v1/observability/deployment/helm.md b/content/v1/observability/deployment/helm.md
index 6d76f8216f..02feb6186f 100644
--- a/content/v1/observability/deployment/helm.md
+++ b/content/v1/observability/deployment/helm.md
@@ -28,7 +28,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
`kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
- If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps:
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps:
2. Copy the driver configuration parameters ConfigMap from the CSI PowerFlex namespace into the CSM for Observability namespace:
diff --git a/content/v1/observability/deployment/offline.md b/content/v1/observability/deployment/offline.md
index 076921deb0..b4c5ccd9d6 100644
--- a/content/v1/observability/deployment/offline.md
+++ b/content/v1/observability/deployment/offline.md
@@ -130,7 +130,7 @@ To perform an offline installation of a Helm chart, the following steps should b
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
- If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps:
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform these steps:
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
diff --git a/content/v1/observability/release/_index.md b/content/v1/observability/release/_index.md
new file mode 100644
index 0000000000..84a9c87ea2
--- /dev/null
+++ b/content/v1/observability/release/_index.md
@@ -0,0 +1,19 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 5
+Description: >
+ Dell Container Storage Modules (CSM) release notes for observability
+---
+
+## Release Notes - CSM Observability 1.2.0
+
+### New Features/Changes
+
+### Fixed Issues
+
+- [PowerStore Grafana dashboard does not populate correctly](https://github.com/dell/csm/issues/279)
+- [Grafana installation script - prometheus address is incorrect](https://github.com/dell/csm/issues/278)
+- [prometheus-values.yaml error in json](https://github.com/dell/csm/issues/259)
+
+### Known Issues
\ No newline at end of file
diff --git a/content/v1/FAQ/_index.md b/content/v1/references/FAQ/_index.md
similarity index 99%
rename from content/v1/FAQ/_index.md
rename to content/v1/references/FAQ/_index.md
index 39ffd7d493..b1fc7aabe0 100644
--- a/content/v1/FAQ/_index.md
+++ b/content/v1/references/FAQ/_index.md
@@ -2,7 +2,7 @@
title: "CSM FAQ"
linktitle: "FAQ"
description: Frequently asked questions of Dell Technologies (Dell) Container Storage Modules
-weight: 2
+weight: 1
---
- [What are Dell Container Storage Modules (CSM)? How different is it from a CSI driver?](#what-are-dell-container-storage-modules-csm-how-different-is-it-from-a-csi-driver)
diff --git a/content/v1/references/_index.md b/content/v1/references/_index.md
new file mode 100644
index 0000000000..28cae60329
--- /dev/null
+++ b/content/v1/references/_index.md
@@ -0,0 +1,7 @@
+---
+title: "References"
+linkTitle: "References"
+weight: 13
+Description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) References
+---
diff --git a/content/v1/contributionguidelines/_index.md b/content/v1/references/contributionguidelines/_index.md
similarity index 99%
rename from content/v1/contributionguidelines/_index.md
rename to content/v1/references/contributionguidelines/_index.md
index e02b519065..427bd231af 100644
--- a/content/v1/contributionguidelines/_index.md
+++ b/content/v1/references/contributionguidelines/_index.md
@@ -1,7 +1,7 @@
---
title: "Contribution Guidelines"
linkTitle: "Contribution Guidelines"
-weight: 12
+weight: 3
Description: >
Dell Technologies (Dell) Container Storage Modules (CSM) docs Contribution Guidelines
---
diff --git a/content/v1/grasp/_index.md b/content/v1/references/learn/_index.md
similarity index 88%
rename from content/v1/grasp/_index.md
rename to content/v1/references/learn/_index.md
index f81a8d8e68..9facbd2d26 100644
--- a/content/v1/grasp/_index.md
+++ b/content/v1/references/learn/_index.md
@@ -1,5 +1,5 @@
---
title: Learn
Description: Brief tutorials on Devops, Kubernetes and containers
-weight: 10
+weight: 2
---
diff --git a/content/v1/grasp/start.md b/content/v1/references/learn/start.md
similarity index 100%
rename from content/v1/grasp/start.md
rename to content/v1/references/learn/start.md
diff --git a/content/v1/grasp/video.md b/content/v1/references/learn/video.md
similarity index 100%
rename from content/v1/grasp/video.md
rename to content/v1/references/learn/video.md
diff --git a/content/v1/references/policies/_index.md b/content/v1/references/policies/_index.md
new file mode 100644
index 0000000000..a5e2875d16
--- /dev/null
+++ b/content/v1/references/policies/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Policies"
+linkTitle: "Policies"
+weight: 4
+Description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) Policies
+---
diff --git a/content/v1/policies/deprecationpolicy/_index.md b/content/v1/references/policies/deprecationpolicy/_index.md
similarity index 100%
rename from content/v1/policies/deprecationpolicy/_index.md
rename to content/v1/references/policies/deprecationpolicy/_index.md
diff --git a/content/v1/release/_index.md b/content/v1/release/_index.md
new file mode 100644
index 0000000000..97a5c32dc9
--- /dev/null
+++ b/content/v1/release/_index.md
@@ -0,0 +1,19 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 10
+Description: >
+ Dell Container Storage Modules (CSM) release notes
+---
+
+Release notes for Container Storage Modules:
+
+[CSI Drivers](../csidriver/release)
+
+[CSM for Authorization](../authorization/release)
+
+[CSM for Observability](../observability/release)
+
+[CSM for Replication](../replication/release)
+
+[CSM for Resiliency](../resiliency/release)
\ No newline at end of file
diff --git a/content/v1/replication/_index.md b/content/v1/replication/_index.md
index cae6e7d45d..df4d1bb45c 100644
--- a/content/v1/replication/_index.md
+++ b/content/v1/replication/_index.md
@@ -30,8 +30,8 @@ CSM for Replication provides the following capabilities:
{{}}
| COP/OS | PowerMax | PowerStore | PowerScale |
|---------------|------------------|------------------|------------|
-| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
-| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
+| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
+| Red Hat OpenShift | 4.9, 4.10 | 4.9, 4.10 | 4.9, 4.10 |
| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x |
| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
| Ubuntu | 20.04 | 20.04 | 20.04 |
@@ -50,11 +50,11 @@ CSM for Replication provides the following capabilities:
CSM for Replication supports the following CSI drivers and versions.
{{}}
-| Storage Array | CSI Driver | Supported Versions |
-| ------------------------------ | -------------------------------------------------------- | ------------------ |
-| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 |
+| Storage Array | CSI Driver | Supported Versions |
+| ------------- | ---------- | ------------------ |
+| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0 + |
+| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 + |
{{
}}
## Details
@@ -74,6 +74,8 @@ the objects still exist in pairs.
* Start applications after the migration.
* Replicate `PersistentVolumeClaim` objects within/across clusters.
* Replication with METRO mode does not need Replicator sidecar and common controller.
+* Different namespaces cannot share the same RDF group when creating volumes with ASYNC mode for PowerMax.
+* The same RDF group cannot be shared across different replication modes for PowerMax.
### CSM for Replication Module Capabilities
@@ -94,9 +96,9 @@ The following matrix provides a list of all supported versions for each Dell Sto
| Platforms | PowerMax | PowerStore | PowerScale |
| ---------- | ----------------- | ---------------- | ---------------- |
-| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
-| RedHat Openshift |4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
-| CSI Driver | 2.x | 2.x | 2.2+ |
+| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
+| RedHat Openshift |4.9, 4.10 | 4.9, 4.10 | 4.9, 4.10 |
+| CSI Driver | 2.x (Kubernetes), 2.2+ (OpenShift) | 2.x | 2.2+ |
For compatibility with storage arrays please refer to corresponding [CSI drivers](../csidriver/#features-and-capabilities)
diff --git a/content/v1/replication/deployment/installation.md b/content/v1/replication/deployment/installation.md
index 3a30e17f5e..005637fac7 100644
--- a/content/v1/replication/deployment/installation.md
+++ b/content/v1/replication/deployment/installation.md
@@ -47,12 +47,15 @@ kubectl create ns dell-replication-controller
cp ../helm/csm-replication/values.yaml ./myvalues.yaml
bash scripts/install.sh --values ./myvalues.yaml
```
->Note: Current installation method allows you to specify custom `:` entry to be appended to controller's `/etc/hosts` file. It can be useful if controller is being deployed in private environment where DNS is not set up properly, but kubernetes clusters use FQDN as API server's address.
+>Note: The current installation method allows you to specify custom `:` entries to be appended to the controller's `/etc/hosts` file. This can be useful if the controller is deployed in a private environment where DNS is not set up properly, but the Kubernetes clusters use an FQDN as the API server's address.
> The feature can be enabled by modifying `values.yaml`.
>``` hostAliases:
-> enableHostAliases: true
-> hostName: "foo.bar"
-> ip: "10.10.10.10"
+> - ip: "10.10.10.10"
+> hostnames:
+> - "foo.bar"
+> - ip: "10.10.10.11"
+> hostnames:
+> - "foo.baz"
This script will do the following:
1. Install `DellCSIReplicationGroup` CRD in your cluster
diff --git a/content/v1/replication/high-availability.md b/content/v1/replication/high-availability.md
index 447036e440..1f2d9b7fe2 100644
--- a/content/v1/replication/high-availability.md
+++ b/content/v1/replication/high-availability.md
@@ -46,6 +46,9 @@ reclaimPolicy: Delete
volumeBindingMode: Immediate
```
+> Note: Different namespaces can share the same RDF group for creating volumes.
+
+
### Snapshots on SRDF Metro volumes
A snapshot can be created on either of the volumes in the metro volume pair depending on the parameters in the `VolumeSnapshotClass`.
The snapshots are by default created on the volumes on the R1 side of the SRDF metro pair, but if a Symmetrix id is specified in the `VolumeSnapshotClass` parameters, the driver creates the snapshot on the specified array; the specified array can either be the R1 or the R2 array. A `VolumeSnapshotClass` with symmetrix id specified in parameters may look as follows:
@@ -59,4 +62,4 @@ driver: driver.dellemc.com
deletionPolicy: Delete
parameters:
SYMID: '000000000001'
-```
\ No newline at end of file
+```
diff --git a/content/v1/replication/migrating-volumes.md b/content/v1/replication/migrating-volumes.md
new file mode 100644
index 0000000000..da524dc314
--- /dev/null
+++ b/content/v1/replication/migrating-volumes.md
@@ -0,0 +1,145 @@
+---
+title: Migrating Volumes
+linktitle: Migrating Volumes
+weight: 6
+description: >
+ Migrating Volumes Between Storage Classes
+---
+
+You can migrate existing pre-provisioned volumes to another storage class by using the volume migration feature.
+
+As of CSM 1.3, two migration directions are supported:
+- from a non-replicated storage class to a replicated one
+- from a replicated storage class to a non-replicated one
+
+## Prerequisites
+- The original volume comes from one of the currently supported CSI drivers (see Support Matrix)
+- The migration sidecar is installed alongside the driver; you can enable it in your `myvalues.yaml` file:
+```yaml
+migration:
+ enabled: true
+```
+
+## Support Matrix
+| Migration Type | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| - | - | - | - | - | - |
+| NON_REPL_TO_REPL | Yes | No | No | No | No |
+| REPL_TO_NON_REPL | Yes | No | No | No | No |
+
+
+## Basic Usage
+
+To trigger the migration procedure, patch the existing PersistentVolume with the migration annotation (by default `migration.storage.dell.com/migrate-to`), and in the value of that annotation specify the name of the StorageClass you want to migrate to.
+
+For example, if we have a PV named `test-pv` already provisioned and we want to migrate it to a replicated storage class named `powermax-replication`, we can run:
+
+```shell
+kubectl patch pv test-pv -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}'
+```
+
+Patching the PV resource triggers the migration sidecar, which issues a `VolumeMigrate` call to the CSI driver. After the migration finishes, a new PersistentVolume is created in the cluster whose name is the original PV's name with `-to-` and the target storage class name appended.
+
+In our example, we will see this when running `kubectl get pv`:
+```shell
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+test-pv 1Gi RWO Retain Bound default/test-pvc powermax 5m
+test-pv-to-powermax-replication 1Gi RWO Retain Available powermax-replication 10s
+
+```
+
+When the volume migration is finished, the source PV is updated with an event denoting that the migration has taken place.
+
+The newly created PV (`test-pv-to-powermax-replication` in our example) is available for consumption via static provisioning by any PVC that requests it, as shown in the sketch below.
+
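+Since the new PV is consumed via static provisioning, a PVC that requests it by name could look like this sketch (the PVC name, size, and access mode are illustrative):
+
+```shell
+# Hypothetical PVC that binds directly to the migrated PV from the example above
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: test-pvc-replicated
+  namespace: default
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: powermax-replication
+  resources:
+    requests:
+      storage: 1Gi
+  volumeName: test-pv-to-powermax-replication
+EOF
+```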
+
+## Namespace Considerations For Replication
+
+Replication Groups in CSM Replication can be made namespaced, meaning that one SC will generate one Replication Group per namespace. This is also important when migrating volumes from or to a replication-enabled storage class.
+
+When only the `migration.storage.dell.com/migrate-to` annotation is set, the migrated volume is assumed to be used in the same namespace as the original PV and its PVC. If it is migrated to a replication-enabled storage class, it will be inserted into the namespaced Replication Group inside the PVC's namespace.
+
+However, you can define the namespace in which the migrated volume must be used after migration by setting `migration.storage.dell.com/namespace`. You can use the same annotation in a scenario where you only have a statically provisioned PV that is not bound to any PVC, and you want to migrate it to another storage class.
+
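+For example, to migrate `test-pv` and mark the migrated volume for use in another namespace (the `prod` namespace here is illustrative):
+
+```shell
+kubectl patch pv test-pv -p '{"metadata": {"annotations": {"migration.storage.dell.com/migrate-to": "powermax-replication", "migration.storage.dell.com/namespace": "prod"}}}'
+```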
+
+## Non Disruptive Migration
+
+You can migrate your PVs without disrupting workloads if you use a StatefulSet with multiple replicas to deploy your application.
+
+Instructions (you can also use `repctl` for convenience):
+
+1. Find every PV for your StatefulSet and patch it with the `migration.storage.dell.com/migrate-to` annotation pointing to the new storage class
+```shell
+kubectl patch pv <pv-name> -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}'
+```
+
+2. Ensure you have a copy of the StatefulSet manifest ready; we will need it later. If you don't have it, you can get it from the cluster
+```shell
+kubectl get sts <statefulset-name> -n <namespace> -o yaml > sts-manifest.yaml
+```
+
+3. To avoid disrupting any workloads, delete the StatefulSet without deleting its pods; to do so, use the `--cascade` flag
+```shell
+kubectl delete sts <statefulset-name> -n <namespace> --cascade=orphan
+```
+
+4. Change the StorageClass in your StatefulSet manifest to point to the new storage class, then apply it to the cluster
+```shell
+kubectl apply -f sts-manifest.yaml
+```
+
+5. Find the PVC and pod of one StatefulSet replica; delete the PVC first and the pod after it
+```shell
+kubectl delete pvc <pvc-name> -n <namespace>
+```
+```shell
+kubectl delete pod <pod-name> -n <namespace>
+```
+
+Wait for the new pod to be created by the StatefulSet; it should create a new PVC that uses the migrated PV.
+
+6. Repeat step 5 until all replicas use new PVCs.
+
+
+## Using repctl
+
+You can use the `repctl` CLI tool to simplify running migration-specific commands.
+
+### Single PV
+
+In its most basic form, repctl can do the same as kubectl. For example, migrating the single PV from our example looks like this:
+
+```shell
+./repctl migrate pv test-pv --to-sc powermax-replication
+```
+
+`repctl` will patch the resource for you. You can also provide the `--wait` flag to make it wait until the migrated PV is created in the cluster.
+`repctl` can also set `migration.storage.dell.com/namespace` for you if you provide the `--target-ns` flag.
+
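+For example, combining both flags (values illustrative):
+
+```shell
+./repctl migrate pv test-pv --to-sc powermax-replication --wait --target-ns prod
+```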
+
+Aside from migrating single PVs, repctl can also migrate PVCs and StatefulSets.
+
+### PVC
+
+`repctl` can find the PV for any given PVC and patch it.
+This can be done with a command similar to the single PV migration:
+
+```shell
+./repctl migrate pvc test-pvc --to-sc powermax-replication -n default
+```
+
+Notice that we provide the original namespace (`default` in our example) for this command because PVCs are namespaced resources, and we need the namespace to be able to find the PVC.
+
+
+### StatefulSet
+
+
+`repctl` can help you migrate an entire StatefulSet by automating the migration process.
+
+You can use this command to do so:
+```shell
+./repctl migrate sts test-sts --to-sc powermax-replication -n default
+```
+
+By default, it will find every Pod, PVC, and PV for the provided StatefulSet and patch every PV with the migration annotation.
+
+You can also optionally provide the `--ndu` flag; with it, repctl performs the steps from the [Non Disruptive Migration](#non-disruptive-migration) section automatically.
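+For example, the same command as above with the flag added:
+
+```shell
+./repctl migrate sts test-sts --to-sc powermax-replication -n default --ndu
+```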
diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md
new file mode 100644
index 0000000000..9d19354c4f
--- /dev/null
+++ b/content/v1/replication/release/_index.md
@@ -0,0 +1,26 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 9
+Description: >
+ Dell Container Storage Modules (CSM) release notes for replication
+---
+
+## Release Notes - CSM Replication 1.3.0
+
+### New Features/Changes
+- Added support for Kubernetes 1.24
+- Added support for OpenShift 4.10
+- Added volume upgrade/downgrade functionality for replication volumes
+
+
+### Fixed Issues
+- Fixed panic occurring when encountering PVC with empty StorageClass
+- PV and RG retention policy checks are no longer case sensitive
+- RG will now display EMPTY link state when no PV found
+- [`PowerScale`] Running `reprotect` action on source cluster after failover no longer puts RG into UNKNOWN state
+- [`PowerScale`] Deleting RG will break replication link before trying to delete group on array
+
+### Known Issues
+
+There are no known issues in this release.
diff --git a/content/v1/resiliency/_index.md b/content/v1/resiliency/_index.md
index 7ccb890831..ab043bc23d 100644
--- a/content/v1/resiliency/_index.md
+++ b/content/v1/resiliency/_index.md
@@ -27,30 +27,30 @@ Accordingly, CSM for Resiliency is adapted to and qualified with each CSI driver
CSM for Resiliency provides the following capabilities:
{{}}
-| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax |
-| --------------------------------------- | :--------: | :---: | :--------: | :-------: | :------: |
-| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | no | yes | no | yes | no |
-| Cleanup pod artifacts from failed nodes | no | yes | no | yes | no |
-| Revoke PV access from failed nodes | no | yes | no | yes | no |
+| Capability | PowerScale | Unity XT | PowerStore | PowerFlex | PowerMax |
+| --------------------------------------- | :--------: | :------: | :--------: | :-------: | :------: |
+| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | yes | yes | no | yes | no |
+| Cleanup pod artifacts from failed nodes | yes | yes | no | yes | no |
+| Revoke PV access from failed nodes | yes | yes | no | yes | no |
{{
}}
## Supported Operating Systems/Container Orchestrator Platforms
{{}}
-| COP/OS | Supported Versions |
-| ---------- | :----------------: |
-| Kubernetes | 1.21, 1.22, 1.23 |
-| Red Hat OpenShift | 4.8, 4.9 |
-| RHEL | 7.x, 8.x |
-| CentOS | 7.8, 7.9 |
+| COP/OS | Supported Versions |
+| ----------------- | :----------------: |
+| Kubernetes | 1.22, 1.23, 1.24 |
+| Red Hat OpenShift | 4.9, 4.10 |
+| RHEL | 7.x, 8.x |
+| CentOS | 7.8, 7.9 |
{{
}}
## Supported Storage Platforms
{{}}
-| | PowerFlex | Unity |
-| ------------- | :----------: | :------------------------: |
-| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0, 5.1.2 |
+| | PowerFlex | Unity XT | PowerScale |
+| ------------- | :----------: | :-------------------------------: | :-------------------------------------: |
+| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 |
{{
}}
## Supported CSI Drivers
@@ -59,30 +59,39 @@ CSM for Resiliency supports the following CSI drivers and versions.
{{}}
| Storage Array | CSI Driver | Supported Versions |
| --------------------------------- | :----------: | :----------------: |
-| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell Unity | [csi-unity](https://github.com/dell/csi-unity) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0.0 + |
+| CSI Driver for Dell Unity XT | [csi-unity](https://github.com/dell/csi-unity) | v2.0.0 + |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.3.0 + |
{{
}}
### PowerFlex Support
-PowerFlex is a highly scalable array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerFlex leverages the following PowerFlex features:
+PowerFlex is a highly scalable array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerFlex leverages these PowerFlex features:
* Very quick detection of Array I/O Network Connectivity status changes (generally takes 1-2 seconds for the array to detect changes)
* A robust mechanism if Nodes are doing I/O to volumes (sampled over a 5-second period).
* Low latency REST API supports fast CSI provisioning and de-provisioning operations.
* A proprietary network protocol provided by the SDC component that can run over the same IP interface as the K8S control plane or over a separate IP interface for Array I/O.
-### Unity Support
+### Unity XT Support
-Dell Unity is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity systems are designed for all-Flash, deliver the best value in the market, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition.
+Dell Unity XT is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity XT systems are designed to deliver the best value in the market. They support all-Flash, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition.
-* Unity (purpose-built): A modern midrange storage solution, engineered from the groundup to meet market demands for Flash, affordability and incredible simplicity. The Unity Family is available in 12 All Flash models and 12 Hybrid models.
-* VxBlock (converged): Unity storage options are also available in Dell VxBlock System 1000.
-* UnityVSA (virtual): The Unity Virtual Storage Appliance (VSA) allows the advanced unified storage and data management features of the Unity family to be easily deployed on VMware ESXi servers, for a ‘software defined’ approach. UnityVSA is available in two editions:
+* Unity XT (purpose-built): A modern midrange storage solution, engineered from the ground up to meet market demands for Flash, affordability and incredible simplicity. The Unity XT Family is available in 12 All Flash models and 12 Hybrid models.
+* VxBlock (converged): Unity XT storage options are also available in Dell VxBlock System 1000.
+* UnityVSA (virtual): The Unity XT Virtual Storage Appliance (VSA) allows the advanced unified storage and data management features of the Unity XT family to be easily deployed on VMware ESXi servers. This allows for a ‘software defined’ approach. UnityVSA is available in two editions:
* Community Edition is a free downloadable 4 TB solution recommended for nonproduction use.
* Professional Edition is a licensed subscription-based offering available at capacity levels of 10 TB, 25 TB, and 50 TB. The subscription includes access to online support resources, EMC Secure Remote Services (ESRS), and on-call software- and systems-related support.
-All three deployment options, i.e. Unity, UnityVSA, and Unity-based VxBlock, enjoy one architecture, one interface with consistent features and rich data services.
+All three deployment options, Unity XT, UnityVSA, and Unity-based VxBlock, enjoy one architecture, one interface with consistent features and rich data services.
+
+### PowerScale Support
+
+PowerScale is a highly scalable NFS array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerScale leverages the following PowerScale features:
+
+* Detection of Array I/O Network Connectivity status changes.
+* A robust mechanism to detect if Nodes are actively doing I/O to volumes.
+* Low latency REST API supports fast CSI provisioning and de-provisioning operations.
## Limitations and Exclusions
@@ -97,11 +106,11 @@ The following provisioning types are supported and have been tested:
* Use of the above volumes with Pods created by StatefulSets.
* Up to 12 or so protected pods on a given node.
* Failing up to 3 nodes at a time in 9 worker node clusters, or failing 1 node at a time in smaller clusters. Application recovery times are dependent on the number of pods that need to be moved as a result of the failure. See the section on "Testing and Performance" for some of the details.
+* Multi-array configurations are supported. For the CSI Driver for PowerScale and the CSI Driver for Unity XT, if any one of the arrays is not connected, array connectivity will be reported as false. For the CSI Driver for PowerFlex, connectivity is determined by the connection to the default array.
### Not Tested But Assumed to Work
* Deployments with the above volume types, provided two pods from the same deployment do not reside on the same node. At the current time anti-affinity rules should be used to guarantee no two pods accessing the same volumes are scheduled to the same node.
-* Multi-array support
### Not Yet Tested or Supported
diff --git a/content/v1/resiliency/deployment.md b/content/v1/resiliency/deployment.md
index 6da570dfd5..8a4a20519f 100644
--- a/content/v1/resiliency/deployment.md
+++ b/content/v1/resiliency/deployment.md
@@ -10,7 +10,9 @@ CSM for Resiliency is installed as part of the Dell CSI driver installation. The
For information on the PowerFlex CSI driver, see [PowerFlex CSI Driver](https://github.com/dell/csi-powerflex).
-For information on the Unity CSI driver, see [Unity CSI Driver](https://github.com/dell/csi-unity).
+For information on the Unity XT CSI driver, see [Unity XT CSI Driver](https://github.com/dell/csi-unity).
+
+For information on the PowerScale CSI driver, see [PowerScale CSI Driver](https://github.com/dell/csi-powerscale).
Configure all the helm chart parameters described below before installing the drivers.
@@ -23,7 +25,7 @@ The drivers that support Helm chart installation allow CSM for Resiliency to be
# Enable this feature only after contact support for additional information
podmon:
enabled: true
- image: dellemc/podmon:v1.1.0
+ image: dellemc/podmon:v1.2.0
controller:
args:
- "--csisock=unix:/var/run/csi/csi.sock"
@@ -31,6 +33,7 @@ podmon:
- "--mode=controller"
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
node:
args:
- "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock"
@@ -38,6 +41,7 @@ podmon:
- "--mode=node"
- "--leaderelection=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
```
@@ -58,8 +62,8 @@ To install CSM for Resiliency with the driver, the following changes are require
| leaderelection | Required | Boolean value that should be set true for controller and false for node. The default value is true. | controller & node |
| skipArrayConnectionValidation | Optional | Boolean value that if set to true will cause controllerPodCleanup to skip the validation that no I/O is ongoing before cleaning up the pod. If set to true will cause controllerPodCleanup on K8S Control Plane failure (kubelet service down). | controller |
| labelKey | Optional | String value that sets the label key used to denote pods to be monitored by CSM for Resiliency. It will make life easier if this key is the same for all driver types, and drivers are differentiated by different labelValues (see below). If the label keys are the same across all drivers you can do `kubectl get pods -A -l labelKey` to find all the CSM for Resiliency protected pods. labelKey defaults to "podmon.dellemc.com/driver". | controller & node |
-| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell PowerFlex and "csi-unity" for CSI Driver for Dell Unity | controller & node |
-| arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller |
+| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell PowerFlex and "csi-unity" for CSI Driver for Dell Unity XT | controller & node |
+| arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller & node |
| arrayConnectivityConnectionLossThreshold | Optional | Gives the number of failed connection polls that will be deemed to indicate array connectivity loss. Should not be set to less than 3. See the specific section for each array type for additional guidance. | controller |
| driver-config-params | Required | String that set the path to a file containing configuration parameter(for instance, Log levels) for a driver. | controller & node |
@@ -75,24 +79,26 @@ podmon:
enabled: true
controller:
args:
- - "-csisock=unix:/var/run/csi/csi.sock"
- - "-labelvalue=csi-vxflexos"
- - "-mode=controller"
- - "-arrayConnectivityPollRate=5"
- - "-arrayConnectivityConnectionLossThreshold=3"
+ - "--csisock=unix:/var/run/csi/csi.sock"
+ - "--labelvalue=csi-vxflexos"
+ - "--mode=controller"
+ - "--arrayConnectivityPollRate=5"
+ - "--arrayConnectivityConnectionLossThreshold=3"
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
node:
args:
- - "-csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock"
- - "-labelvalue=csi-vxflexos"
- - "-mode=node"
- - "-leaderelection=false"
+ - "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock"
+ - "--labelvalue=csi-vxflexos"
+ - "--mode=node"
+ - "--leaderelection=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
```
-## Unity Specific Recommendations
+## Unity XT Specific Recommendations
Here is a typical installation used for testing:
@@ -102,28 +108,60 @@ podmon:
enabled: true
controller:
args:
- - "-csisock=unix:/var/run/csi/csi.sock"
- - "-labelvalue=csi-unity"
- - "-driverPath=csi-unity.dellemc.com"
- - "-mode=controller"
+ - "--csisock=unix:/var/run/csi/csi.sock"
+ - "--labelvalue=csi-unity"
+ - "--driverPath=csi-unity.dellemc.com"
+ - "--mode=controller"
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/unity-config/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
node:
args:
- - "-csisock=unix:/var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock"
- - "-labelvalue=csi-unity"
- - "-driverPath=csi-unity.dellemc.com"
- - "-mode=node"
- - "-leaderelection=false"
+ - "--csisock=unix:/var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock"
+ - "--labelvalue=csi-unity"
+ - "--driverPath=csi-unity.dellemc.com"
+ - "--mode=node"
+ - "--leaderelection=false"
- "--driver-config-params=/unity-config/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
+
+```
+
+## PowerScale Specific Recommendations
+
+Here is a typical installation used for testing:
+```yaml
+podmon:
+ image: dellemc/podmon
+ enabled: true
+ controller:
+ args:
+ - "--csisock=unix:/var/run/csi/csi.sock"
+ - "--labelvalue=csi-isilon"
+ - "--arrayConnectivityPollRate=60"
+ - "--driverPath=csi-isilon.dellemc.com"
+ - "--mode=controller"
+ - "--skipArrayConnectionValidation=false"
+ - "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
+ node:
+ args:
+ - "--csisock=unix:/var/lib/kubelet/plugins/csi-isilon/csi_sock"
+ - "--labelvalue=csi-isilon"
+ - "--arrayConnectivityPollRate=60"
+ - "--driverPath=csi-isilon.dellemc.com"
+ - "--mode=node"
+ - "--leaderelection=false"
+ - "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
```
## Dynamic parameters
CSM for Resiliency has configuration parameters that can be updated dynamically, such as the logging level and format. This can be
-done by editing the DellEMC CSI Driver's parameters ConfigMap. The ConfigMap can be queried using kubectl.
-For example, the DellEMC Powerflex CSI Driver ConfigMaps can be found using the following command: `kubectl get -n vxflexos configmap`.
+done by editing the Dell CSI Driver's parameters ConfigMap. The ConfigMap can be queried using kubectl.
+For example, the Dell Powerflex CSI Driver ConfigMaps can be found using this command: `kubectl get -n vxflexos configmap`.
The ConfigMap to edit will have this pattern: `<driver-name>-config-params` (e.g., `vxflexos-config-params`).
To update or add parameters, you can use the `kubectl edit` command. For example, `kubectl edit -n vxflexos configmap vxflexos-config-params`.
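For example, a minimal sketch of changing the log level, assuming the driver reads a `CSI_LOG_LEVEL` key from the `driver-config-params.yaml` entry inside that ConfigMap:

```shell
# Open the ConfigMap for editing
kubectl edit -n vxflexos configmap vxflexos-config-params
# Then, in the editor, set (key name assumed):
#   driver-config-params.yaml: |
#     CSI_LOG_LEVEL: "debug"
```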
diff --git a/content/v1/resiliency/release/_index.md b/content/v1/resiliency/release/_index.md
new file mode 100644
index 0000000000..3beec86748
--- /dev/null
+++ b/content/v1/resiliency/release/_index.md
@@ -0,0 +1,21 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 1
+Description: >
+ Dell Container Storage Modules (CSM) release notes for resiliency
+---
+
+## Release Notes - CSM Resiliency 1.2.0
+
+### New Features/Changes
+
+- Support for node taint when driver pod is unhealthy.
+- Resiliency protection on driver node pods, see [CSI node failure protection](https://github.com/dell/csm/issues/145).
+- Resiliency support for CSI Driver for PowerScale, see [CSI Driver for PowerScale](https://github.com/dell/csm/issues/262).
+
+### Fixed Issues
+
+- Occasional failure unmounting Unity volume for raw block devices via iSCSI, see [unmounting Unity volume](https://github.com/dell/csm/issues/237).
+
+### Known Issues
\ No newline at end of file
diff --git a/content/v1/resiliency/upgrade.md b/content/v1/resiliency/upgrade.md
index 4466c77cc6..a8cc56a9c2 100644
--- a/content/v1/resiliency/upgrade.md
+++ b/content/v1/resiliency/upgrade.md
@@ -10,7 +10,9 @@ CSM for Resiliency can be upgraded as part of the Dell CSI driver upgrade proces
For information on the PowerFlex CSI driver upgrade process, see [PowerFlex CSI Driver](../../csidriver/upgradation/drivers/powerflex).
-For information on the Unity CSI driver upgrade process, see [Unity CSI Driver](../../csidriver/upgradation/drivers/unity).
+For information on the Unity XT CSI driver upgrade process, see [Unity XT CSI Driver](../../csidriver/upgradation/drivers/unity).
+
+For information on the PowerScale CSI driver upgrade process, see [PowerScale CSI Driver](../../csidriver/upgradation/drivers/isilon).
## Helm Chart Upgrade
diff --git a/content/v1/resiliency/usecases.md b/content/v1/resiliency/usecases.md
index daac595325..22ce18aae0 100644
--- a/content/v1/resiliency/usecases.md
+++ b/content/v1/resiliency/usecases.md
@@ -38,3 +38,5 @@ CSM for Resiliency's design is focused on detecting the following types of hardw
3. Array I/O Network failure is detected by polling the array to determine if the array has a healthy connection to the node. The capabilities to do this vary greatly by array and communication protocol type (Fibre Channel, iSCSI, NFS, NVMe, or PowerFlex SDC IP protocol). By monitoring the Array I/O Network separately from the Control Plane Network, CSM for Resiliency has two different indicators of whether the node is healthy or not.
4. K8S Control Plane Failure. Control Plane Failure is defined as failure of kubelet in a given node. K8S Control Plane failures are generally discovered by receipt of a Node event with a NoSchedule or NoExecute taint, or detection of such a taint when retrieving the Node via the K8S API.
+
+5. CSI driver node pods. CSM for Resiliency monitors the CSI driver node pods. If for any reason a CSI driver node pod fails and enters the NotReady state, CSM for Resiliency taints the node with a NoSchedule value. This prevents the Kubernetes scheduler from scheduling new workloads on that node, and so avoids placing workloads that need the CSI driver pods to be in the Ready state.
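+
+To check whether a node currently carries such a taint, a generic inspection can be used (the exact taint key is driver-specific):
+
+```shell
+kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
+```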
diff --git a/content/v1/snapshots/volume-group-snapshots/_index.md b/content/v1/snapshots/volume-group-snapshots/_index.md
new file mode 100644
index 0000000000..c266498bef
--- /dev/null
+++ b/content/v1/snapshots/volume-group-snapshots/_index.md
@@ -0,0 +1,51 @@
+---
+title: "Volume Group Snapshots"
+linkTitle: "Volume Group Snapshots"
+weight: 8
+Description: >
+ Volume Group Snapshot module of Dell CSI drivers
+---
+## Volume Group Snapshot Feature
+
+In order to use Volume Group Snapshots, ensure the following volume snapshot components are enabled:
+- Kubernetes Volume Snapshot CRDs
+- Volume Snapshot Controller
+- Volume Snapshot Class
+
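+A quick way to check for these components (a sketch; resource names follow the upstream external-snapshotter project):
+
+```shell
+# Volume snapshot CRDs registered in the cluster
+kubectl get crd | grep snapshot.storage.k8s.io
+# Available volume snapshot classes
+kubectl get volumesnapshotclass
+```
+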
+### Creating Volume Group Snapshots
+This is a sample manifest for creating a Volume Group Snapshot:
+```yaml
+apiVersion: volumegroup.storage.dell.com/v1
+kind: DellCsiVolumeGroupSnapshot
+metadata:
+ name: "vgs-test"
+ namespace: "test"
+spec:
+ # Add fields here
+ driverName: "csi-<driver>.dellemc.com" # Example: "csi-powerstore.dellemc.com"
+ # defines how to process VolumeSnapshot members when volume group snapshot is deleted
+ # "Retain" - keep VolumeSnapshot instances
+ # "Delete" - delete VolumeSnapshot instances
+ memberReclaimPolicy: "Retain"
+ volumesnapshotclass: ""
+ pvcLabel: "vgs-snap-label"
+ # pvcList:
+ # - "pvcName1"
+ # - "pvcName2"
+```
+
+The PVC labels field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
+
+```yaml
+metadata:
+ name: volume1
+ namespace: test
+ labels:
+ volume-group: vgs-snap-label
+```
+
+More details about the installation and use of the VolumeGroup Snapshotter can be found here: [dell-csi-volumegroup-snapshotter](https://github.com/dell/csi-volumegroup-snapshotter).
+
+>Note: A volume group cannot be seen at the Kubernetes level as of now; only volume group snapshots can be viewed, as a CRD.
+
+>The Volume Group Snapshots feature is supported with Helm.
diff --git a/content/v2/_index.md b/content/v2/_index.md
index 68f876afee..181e677e61 100644
--- a/content/v2/_index.md
+++ b/content/v2/_index.md
@@ -17,23 +17,23 @@ CSM is made up of multiple components including modules (enterprise capabilities
## CSM Supported Modules and Dell CSI Drivers
-| Modules/Drivers | CSM 1.2 | [CSM 1.1](../v1/) | [CSM 1.0.1](../v1/) | [CSM 1.0](../v2/) |
+| Modules/Drivers | CSM 1.2.1 | [CSM 1.2](../v1/) | [CSM 1.1](../v1/) | [CSM 1.0.1](../v2/) |
| - | :-: | :-: | :-: | :-: |
-| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.1 | 1.0 | 1.0 |
-| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1 | 1.0.1 | 1.0.1 | 1.0 |
-| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.1 | 1.0 | 1.0 |
-| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.0.1 | 1.0.1 | 1.0 |
-| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
-| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
-| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
-| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
-| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.2 | 1.1 | 1.0 |
+| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1.1 | 1.1 | 1.0.1 | 1.0.1 |
+| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.2 | 1.1 | 1.0 |
+| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.1 | 1.0.1 | 1.0.1 |
+| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
+| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
+| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
+| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
+| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
## CSM Modules Support Matrix for Dell CSI Drivers
| CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 |
| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- |
| Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
-| Observability v1.1| ✔️ | ❌ | ✔️ | ❌ | ❌ |
+| Observability v1.1.1 | ✔️ | ❌ | ✔️ | ❌ | ❌ |
| Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ |
| Resiliency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️ |
\ No newline at end of file
diff --git a/content/v2/csidriver/features/powermax.md b/content/v2/csidriver/features/powermax.md
index 55a57131c9..a635b79ec6 100644
--- a/content/v2/csidriver/features/powermax.md
+++ b/content/v2/csidriver/features/powermax.md
@@ -78,6 +78,8 @@ spec:
### Creating PVCs with PVCs as source
+This is not supported for replicated volumes.
+
This is a sample manifest for creating a PVC with another PVC as a source:
```yaml
apiVersion: v1
@@ -158,6 +160,8 @@ To install multiple CSI drivers, follow these steps:
Starting in v1.4, the CSI PowerMax driver supports the expansion of Persistent Volumes (PVs). This expansion is done online, which is when the PVC is attached to any node.
+>Note: This feature is not supported for replicated volumes.
+
To use this feature, enable in `values.yaml`
```yaml
diff --git a/content/v2/csidriver/installation/offline/_index.md b/content/v2/csidriver/installation/offline/_index.md
index 59a7c082f3..07b0000bdb 100644
--- a/content/v2/csidriver/installation/offline/_index.md
+++ b/content/v2/csidriver/installation/offline/_index.md
@@ -65,10 +65,10 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-git clone https://github.com/dell/dell-csi-operator.git
+git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git
```
```
-cd dell-csi-operator
+cd dell-csi-operator/scripts
```
```
[root@user scripts]# ./csi-offline-bundle.sh -c
diff --git a/content/v2/csidriver/installation/operator/_index.md b/content/v2/csidriver/installation/operator/_index.md
index 71140cd643..be62fc2dec 100644
--- a/content/v2/csidriver/installation/operator/_index.md
+++ b/content/v2/csidriver/installation/operator/_index.md
@@ -97,10 +97,9 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
-3. git checkout dell-csi-operator-`your-version'
-4. Run `bash scripts/install.sh` to install the operator.
+3. Run `bash scripts/install.sh` to install the operator.
>NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
Any existing installations of Dell CSI Operator (v1.2.0 or later) installed using `install.sh` to the 'default' or 'dell-csi-operator' namespace can be upgraded to the new version by running `install.sh --upgrade`.
diff --git a/content/v2/csidriver/installation/test/powermax.md b/content/v2/csidriver/installation/test/powermax.md
index 01b87aca59..f1350305ce 100644
--- a/content/v2/csidriver/installation/test/powermax.md
+++ b/content/v2/csidriver/installation/test/powermax.md
@@ -40,6 +40,7 @@ This script does the following:
- After that, it uses that PVC as the data source to create a new PVC and mounts it on the same container. It checks if the file that existed in the source PVC also exists in the new PVC, calculates its checksum, and compares it to the checksum previously calculated.
- Finally, it cleans up all the resources that are created as part of the test.
+> This is not supported for replicated volumes.
#### Snapshot test
@@ -71,6 +72,8 @@ Use this procedure to perform a volume expansion test.
- After that, it calculates the checksum of the written data, expands the PVC, and then recalculates the checksum
- Cleans up all the resources that were created as part of the test
+>Note: This is not applicable for replicated volumes.
+
### Setting Application Prefix
Application prefix is the name of the application that can be used to group the PowerMax volumes. We can use it while naming storage group. To set the application prefix for PowerMax, please refer to the sample storage class https://github.com/dell/csi-powermax/blob/main/samples/storageclass/powermax.yaml.
diff --git a/content/v2/csidriver/release/powermax.md b/content/v2/csidriver/release/powermax.md
index 52c67cf950..5739dd04ee 100644
--- a/content/v2/csidriver/release/powermax.md
+++ b/content/v2/csidriver/release/powermax.md
@@ -25,3 +25,4 @@ There are no fixed issues in this release.
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
+- Expansion of volumes and cloning of volumes are not supported for replicated volumes.
diff --git a/content/v2/csidriver/upgradation/drivers/operator.md b/content/v2/csidriver/upgradation/drivers/operator.md
index 0cfbc9355e..d3f9b22a5b 100644
--- a/content/v2/csidriver/upgradation/drivers/operator.md
+++ b/content/v2/csidriver/upgradation/drivers/operator.md
@@ -13,10 +13,9 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the
### Using Installation Script
-1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
-3. git checkout dell-csi-operator-'your-version'
-4. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
+3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
>Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
### Using OLM
diff --git a/content/v2/deployment/csmoperator/drivers/powerscale.md b/content/v2/deployment/csmoperator/drivers/powerscale.md
index 951ece9dd0..4471f1d1e6 100644
--- a/content/v2/deployment/csmoperator/drivers/powerscale.md
+++ b/content/v2/deployment/csmoperator/drivers/powerscale.md
@@ -18,7 +18,8 @@ Note that the deployment of the driver using the operator does not use any Helm
User can query for all Dell CSI drivers using the following command:
`kubectl get csm --all-namespaces`
-### Install Driver
+
+### Prerequisite
1. Create namespace.
Execute `kubectl create namespace test-isilon` to create the test-isilon namespace (if not already present). Note that the namespace can be any user-defined name, in this example, we assume that the namespace is 'test-isilon'.
@@ -104,10 +105,14 @@ User can query for all Dell CSI drivers using the following command:
```
Execute command: ```kubectl create -f empty-secret.yaml```
-4. Create a CR (Custom Resource) for PowerScale using the sample files provided
+### Install Driver
+
+1. Follow all the [prerequisites](#prerequisite) above
+
+2. Create a CR (Custom Resource) for PowerScale using the sample files provided
[here](https://github.com/dell/csm-operator/tree/master/samples). This file can be modified to use custom parameters if needed.
-5. Users should configure the parameters in CR. The following table lists the primary configurable parameters of the PowerScale driver and their default values:
+3. Users should configure the parameters in CR. The following table lists the primary configurable parameters of the PowerScale driver and their default values:
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
@@ -128,11 +133,11 @@ User can query for all Dell CSI drivers using the following command:
| X_CSI_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | Yes | 0 |
| X_CSI_MODE | Driver starting mode | No | node |
-6. Execute the following command to create PowerScale custom resource:
+4. Execute the following command to create PowerScale custom resource:
```kubectl create -f ``` .
This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file.
-7. [Verify the CSI Driver installation](../../#verifying-the-driver-installation)
+5. [Verify the CSI Driver installation](../../#verifying-the-driver-installation)
**Note** :
1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
diff --git a/content/v2/deployment/csmoperator/modules/_index.md b/content/v2/deployment/csmoperator/modules/_index.md
index 4b79544a51..4a76e7d868 100644
--- a/content/v2/deployment/csmoperator/modules/_index.md
+++ b/content/v2/deployment/csmoperator/modules/_index.md
@@ -3,4 +3,11 @@ title: "CSM Modules"
linkTitle: "CSM Modules"
description: Installation of Dell CSM Modules using Dell CSM Operator
weight: 2
----
\ No newline at end of file
+---
+
+The CSM Operator can optionally enable modules that are supported by the specific Dell CSI driver. By default, the modules are disabled but they can be enabled by setting any pre-requisite configuration options for the given module and setting the enabled flag to true in the custom resource.
+The steps include:
+
+1. Deploy the Dell CSM Operator (if it is not already deployed). Please follow the instructions available [here](../../#installation).
+2. Configure any prerequisites for the desired module(s). See the specific module below for more information.
+3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
\ No newline at end of file
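+
+As a hypothetical illustration, a module could be toggled on an existing ContainerStorageModule resource with a patch like the one below (the CR name, namespace, and module name are placeholders, and the `modules` schema is assumed from the sample manifests; note that a JSON merge patch replaces the whole `modules` list, so editing the CR manifest is usually preferable):
+
+```shell
+kubectl patch csm isilon -n test-isilon --type merge \
+  -p '{"spec":{"modules":[{"name":"replication","enabled":true}]}}'
+```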
diff --git a/content/v2/deployment/csmoperator/modules/authorization.md b/content/v2/deployment/csmoperator/modules/authorization.md
index 3e9307bab8..4d1e2ca19b 100644
--- a/content/v2/deployment/csmoperator/modules/authorization.md
+++ b/content/v2/deployment/csmoperator/modules/authorization.md
@@ -2,19 +2,11 @@
title: Authorization
linkTitle: "Authorization"
description: >
- Installing Authorization via Dell CSM Operator
+ Pre-requisite for Installing Authorization via Dell CSM Operator
---
-## Installing Authorization via Dell CSM Operator
+The CSM Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. Please note that the Dell CSM Operator currently ONLY supports deploying the CSM Authorization sidecar/container.
-The Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator.
+## Pre-requisite
-To deploy the Dell CSM Operator, follow the instructions available [here](../../#installation).
-
-There are [sample manifests](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerscale.yaml) provided which can be edited to do an easy installation of the driver along with the module.
-
-### Install Authorization
-
-1. Create the required Secrets as documented in the [Helm chart procedure](../../../../authorization/deployment/#configuring-a-dell-csi-driver).
-
-2. Follow the instructions available [here](../../drivers/powerscale/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable Authorization.
\ No newline at end of file
+Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../authorization/deployment/_index.md/#configuring-a-dell-csi-driver).
\ No newline at end of file
diff --git a/content/v2/deployment/csmoperator/modules/replication.md b/content/v2/deployment/csmoperator/modules/replication.md
new file mode 100644
index 0000000000..cba958854a
--- /dev/null
+++ b/content/v2/deployment/csmoperator/modules/replication.md
@@ -0,0 +1,27 @@
+---
+title: Replication
+linkTitle: "Replication"
+description: >
+ Pre-requisite for Installing Replication via Dell CSM Operator
+---
+
+The CSM Replication module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. The Dell CSM Operator will deploy the CSM Replication sidecar and the complementary CSM Replication controller manager.
+
+## Prerequisite
+
+To use Replication, you need at least two clusters:
+
+- a source cluster which is the main cluster
+- one or more target clusters which will serve as disaster recovery clusters for the main cluster
+
+To configure all the clusters, follow the steps below:
+
+1. On your main cluster, follow the instructions available in CSM Replication for [Installation using repctl](../../../replication/deployment/install-repctl.md). NOTE: On step 4 of the link above, you MUST use the command below to automatically package all clusters' `.kube` config as a secret:
+
+```shell
+ ./repctl cluster inject
+```
+
+CSM Operator needs these admin configs, instead of the service accounts' configs, to be able to properly manage the target clusters. The default service account that will be used is the CSM Operator service account.
+
+2. On each of the target clusters, configure the prerequisites for deploying the driver via Dell CSM Operator. For example, PowerScale has the following [prerequisites for deploying PowerScale via Dell CSM Operator](../drivers/powerscale.md/#prerequisite)
\ No newline at end of file
diff --git a/content/v2/observability/deployment/_index.md b/content/v2/observability/deployment/_index.md
index 582e8d90c0..9a5d6f2566 100644
--- a/content/v2/observability/deployment/_index.md
+++ b/content/v2/observability/deployment/_index.md
@@ -30,7 +30,7 @@ The Prometheus service should be running on the same Kubernetes cluster as the C
| Supported Version | Image | Helm Chart |
| ----------------- | ----------------------- | ------------------------------------------------------------ |
-| 2.22.0 | prom/prometheus:v2.22.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) |
+| 2.23.0 | prom/prometheus:v2.23.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) |
**Note**: It is the user's responsibility to provide persistent storage for Prometheus if they want to preserve historical data.
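For example, persistence can be turned on through the chart values; a minimal sketch, assuming the upstream chart's `server.persistentVolume` settings:

```shell
# Merge these keys into prometheus-values.yaml (avoid duplicating an existing "server:" key)
cat <<'EOF' >> prometheus-values.yaml
server:
  persistentVolume:
    enabled: true
    size: 8Gi
EOF
```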
@@ -65,13 +65,13 @@ Here is a sample minimal configuration for Prometheus. Please note that the conf
type: NodePort
servicePort: 9090
extraScrapeConfigs: |
- - job_name: 'karavi-metrics-powerflex'
- scrape_interval: 5s
- scheme: https
- static_configs:
- - targets: ['otel-collector:8443']
- tls_config:
- insecure_skip_verify: true
+ - job_name: 'karavi-metrics-[CSI-DRIVER]'
+ scrape_interval: 5s
+ scheme: https
+ static_configs:
+ - targets: ['otel-collector:8443']
+ tls_config:
+ insecure_skip_verify: true
```
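+
+For example, with the CSI Driver for PowerFlex the scrape job would be named `karavi-metrics-powerflex`; substitute the appropriate driver name for other arrays.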
2. If using Rancher, create a ServiceMonitor.
@@ -227,7 +227,7 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste
- name: Prometheus
type: prometheus
access: proxy
- url: 'http://prometheus:9090'
+ url: 'http://prometheus-server:9090'
isDefault: null
version: 1
editable: true
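+
+Note that `prometheus-server` is the service name typically created by the Prometheus Helm chart when it is installed with the release name `prometheus`; if your release exposes the service under a different name, adjust the datasource URL to match.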
diff --git a/content/v2/replication/_index.md b/content/v2/replication/_index.md
index fe7de3d6dd..cae6e7d45d 100644
--- a/content/v2/replication/_index.md
+++ b/content/v2/replication/_index.md
@@ -16,32 +16,32 @@ applications in case of both planned and unplanned migration.
CSM for Replication provides the following capabilities:
{{}}
-| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax |
-| - | :-: | :-: | :-: | :-: | :-: |
-| Replicate data using native storage array based replication | yes | no | yes | no | yes |
-| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | no | yes | no | yes |
-| Create `DellCSIReplicationGroup` objects in the cluster | yes | no | yes | no | yes |
-| Failover & Reprotect applications using the replicated volumes | yes | no | yes | no | yes |
-| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | no | yes | no | yes |
+| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| ----------------------------------------------------------------------------------- | :------: | :--------: | :--------: | :-------: | :---: |
+| Replicate data using native storage array based replication | yes | yes | yes | no | no |
+| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
+| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
+| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no |
+| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
{{
}}
## Supported Operating Systems/Container Orchestrator Platforms
{{}}
-| COP/OS | PowerMax | PowerStore | PowerScale |
-|-|-|-|-|
-| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23|
-| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
-| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x |
-| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
-| Ubuntu | 20.04 | 20.04 | 20.04 |
-| SLES | 15SP2 | 15SP2 | 15SP2 |
+| COP/OS | PowerMax | PowerStore | PowerScale |
+|---------------|------------------|------------------|------------|
+| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
+| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
+| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x |
+| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
+| Ubuntu | 20.04 | 20.04 | 20.04 |
+| SLES | 15SP2 | 15SP2 | 15SP2 |
{{
}}
## Supported Storage Platforms
{{}}
-| | PowerMax | PowerStore | PowerScale |
+| | PowerMax | PowerStore | PowerScale |
|---------------|:-------------------:|:----------------:|:----------------:|
| Storage Array | 5978.479.479, 5978.711.711, Unisphere 9.2 | 1.0.x, 2.0.x, 2.1.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
{{
}}
@@ -50,11 +50,11 @@ CSM for Replication provides the following capabilities:
CSM for Replication supports the following CSI drivers and versions.
{{}}
-| Storage Array | CSI Driver | Supported Versions |
-| ------------- | ---------- | ------------------ |
-| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 |
-| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 |
+| Storage Array | CSI Driver | Supported Versions |
+| ------------------------------ | -------------------------------------------------------- | ------------------ |
+| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 |
{{
}}
## Details
@@ -80,27 +80,23 @@ the objects still exist in pairs.
CSM for Replication provides the following capabilities:
{{}}
-| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
-| ---------| -------- | -------- | -------- | -------- | -------- |
-| Asynchronous replication of PVs accross K8s clusters | yes | yes | yes | no | no |
-| Synchronous replication of PVs accross K8s clusters | yes | no | no | no | no |
-| Single cluster (stretched) mode replication | yes | yes | yes | no | no |
-| Replication actions (failover, reprotect) | yes | yes | yes | no | no |
+| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| ----------------------------------------------------------------| -------- | ---------- | ---------- | --------- | ----- |
+| Asynchronous replication of PVs across K8s clusters or within a single cluster | yes | yes (block)| yes | no | no |
+| Synchronous replication of PVs across K8s clusters or within a single cluster | yes | no | no | no | no |
+| Metro replication within a single (stretched) cluster | yes | no | no | no | no |
+| Replication actions (failover, reprotect) | yes | yes | yes | no | no |
{{
}}
### Supported Platforms
The following matrix provides a list of all supported versions for each Dell Storage product.
-| Platforms | PowerMax | PowerStore | PowerScale |
-| -------- | --------- | ---------- | ---------- |
+| Platforms | PowerMax | PowerStore | PowerScale |
+| ---------- | ----------------- | ---------------- | ---------------- |
| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
-| CSI Driver | 2.x | 2.x | 2.2+ |
-
-| Platforms | PowerMax | PowerStore | PowerScale |
-| -------- | --------- | ---------- | ---------- |
-| RedHat Openshift |4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
-| CSI Driver | 2.2+ | 2.x | 2.2+ |
+| RedHat Openshift |4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
+| CSI Driver | 2.x | 2.x | 2.2+ |
For compatibility with storage arrays, please refer to the corresponding [CSI drivers](../csidriver/#features-and-capabilities)
diff --git a/content/v3/FAQ/_index.md b/content/v3/FAQ/_index.md
index b7584f0534..39ffd7d493 100644
--- a/content/v3/FAQ/_index.md
+++ b/content/v3/FAQ/_index.md
@@ -1,21 +1,19 @@
---
title: "CSM FAQ"
linktitle: "FAQ"
-description: Frequently asked questions of Dell EMC Container Storage Modules
+description: Frequently asked questions about Dell Technologies (Dell) Container Storage Modules
weight: 2
---
- [What are Dell Container Storage Modules (CSM)? How different is it from a CSI driver?](#what-are-dell-container-storage-modules-csm-how-different-is-it-from-a-csi-driver)
- [Where do I start with Dell Container Storage Modules (CSM)?](#where-do-i-start-with-dell-container-storage-modules-csm)
-- [Is the Container Storage Module XYZ available for my array?](#is-the-container-storage-module-xyz-available-for-my-array)
- [What are the prerequisites for deploying Container Storage Modules?](#what-are-the-prerequisites-for-deploying-container-storage-modules)
-- [How do I uninstall or disable a Container Storage Module?](#how-do-i-uninstall-or-a-disable-a-module)
+- [How do I uninstall or disable a module?](#how-do-i-uninstall-or-disable-a-module)
- [How do I troubleshoot Container Storage Modules?](#how-do-i-troubleshoot-container-storage-modules)
- [Can I use the CSM functionality like Prometheus collection or Authorization quotas for my non-Kubernetes storage clients?](#can-i-use-the-csm-functionality-like-prometheus-collection-or-authorization-quotas-for-my-non-kubernetes-storage-clients)
- [Should I install the module in the same namespace as the driver or another?](#should-i-install-the-module-in-the-same-namespace-as-the-driver-or-another)
- [Which Kubernetes distributions are supported?](#which-kubernetes-distributions-are-supported)
- [How do I get a list of Container Storage Modules deployed in my cluster with their versions?](#how-do-i-get-a-list-of-container-storage-modules-deployed-in-my-cluster-with-their-versions)
-- [Does the CSM Installer provide full Container Storage Modules functionality for all products?](#does-the-csm-installer-provide-full-container-storage-modules-functionality-for-all-products)
- [Do all Container Storage Modules need to be the same version, or can I mix and match?](#do-all-container-storage-modules-need-to-be-the-same-version-or-can-i-mix-and-match)
- [Can I run Container Storage Modules in a production environment?](#can-i-run-container-storage-modules-in-a-production-environment)
- [Is Dell Container Storage Modules (CSM) supported by Dell Technologies?](#is-dell-container-storage-modules-csm-supported-by-dell-technologies)
@@ -30,53 +28,42 @@ The main goal with CSM modules is to expose storage array enterprise features di
### Where do I start with Dell Container Storage Modules (CSM)?
The umbrella repository for every Dell Container Storage Module is: [https://github.com/dell/csm](https://github.com/dell/csm).
-### Is the Container Storage Module XYZ available for my array?
-Please see module and the respectice CSI driver version available for each array:
-
-| CSM Module | CSI PowerFlex v2.1 | CSI PowerScale v2.1 | CSI PowerStore v2.1 | CSI PowerMax v2.1 | CSI Unity XT v2.1 |
-| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- |
-| Authorization v1.1| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
-| Observability v1.0| ✔️ | ❌ | ✔️ | ❌ | ❌ |
-| Replication v1.1| ❌ | ❌ | ✔️ | ✔️ | ❌ |
-| Resilency v1.0| ✔️ | ❌ | ❌ | ❌ | ✔️ |
-| CSM Installer v1.0| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-
### What are the prerequisites for deploying Container Storage Modules?
Prerequisites can be found on the respective module deployment pages:
-- [Dell EMC Container Storage Module for Observability Deployment](../observability/deployment/#prerequisites)
-- [Dell EMC Container Storage Module for Authorization Deployment](../authorization/deployment/#prerequisites)
-- [Dell EMC Container Storage Module for Resiliency Deployment](../resiliency/deployment/)
-- [Dell EMC Container Storage Module for Replication Deployment](../replication/deployment/installation/#before-you-begin)
+- [Dell Container Storage Module for Observability Deployment](../observability/deployment/#prerequisites)
+- [Dell Container Storage Module for Authorization Deployment](../authorization/deployment/#prerequisites)
+- [Dell Container Storage Module for Resiliency Deployment](../resiliency/deployment/)
+- [Dell Container Storage Module for Replication Deployment](../replication/deployment/installation/#before-you-begin)
-Prerequisites for deploying the Dell EMC CSI drivers can be found here:
-- [Dell EMC CSI Drivers Deployment](../csidriver/installation/)
+Prerequisites for deploying the Dell CSI drivers can be found here:
+- [Dell CSI Drivers Deployment](../csidriver/installation/)
-### How do I uninstall or a disable a module?
-- [Dell EMC Container Storage Module for Authorization](../authorization/uninstallation/)
-- [Dell EMC Container Storage Module for Observability](../observability/uninstall/)
-- [Dell EMC Container Storage Module for Resiliency](../resiliency/uninstallation/)
+### How do I uninstall or disable a module?
+- [Dell Container Storage Module for Authorization](../authorization/uninstallation/)
+- [Dell Container Storage Module for Observability](../observability/uninstall/)
+- [Dell Container Storage Module for Resiliency](../resiliency/uninstallation/)
### How do I troubleshoot Container Storage Modules?
-- [Dell EMC CSI Drivers](../csidriver/troubleshooting/)
-- [Dell EMC Container Storage Module for Authorization](../authorization/troubleshooting/)
-- [Dell EMC Container Storage Module for Observability](../observability/troubleshooting/)
-- [Dell EMC Container Storage Module for Replication](../replication/troubleshooting/)
-- [Dell EMC Container Storage Module for Resiliency](../resiliency/troubleshooting/)
+- [Dell CSI Drivers](../csidriver/troubleshooting/)
+- [Dell Container Storage Module for Authorization](../authorization/troubleshooting/)
+- [Dell Container Storage Module for Observability](../observability/troubleshooting/)
+- [Dell Container Storage Module for Replication](../replication/troubleshooting/)
+- [Dell Container Storage Module for Resiliency](../resiliency/troubleshooting/)
### Can I use the CSM functionality like Prometheus collection or Authorization quotas for my non-Kubernetes storage clients?
-No, all the modules have been designed to work inside Kubernetes with Dell EMC CSI drivers.
+No, all the modules have been designed to work inside Kubernetes with Dell CSI drivers.
### Should I install the module in the same namespace as the driver or another?
-It is recommended to install CSM for Observability in a namespace separate from the Dell EMC CSI drivers because it works across multiple drivers. All other modules either run as standalone or are injected into the Dell EMC CSI driver as a sidecar.
+It is recommended to install CSM for Observability in a namespace separate from the Dell CSI drivers because it works across multiple drivers. All other modules either run as standalone or with the Dell CSI driver as a sidecar.
### Which Kubernetes distributions are supported?
The supported Kubernetes distributions for Container Storage Modules are documented:
-- [Dell EMC Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms)
-The supported distros for the Dell EMC CSI Drivers are located [here](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms).
+The supported distros for the Dell CSI Drivers are located [here](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms).
### How do I get a list of Container Storage Modules deployed in my cluster with their versions?
The easiest way to find the module version is to check the image tag for the module. For all namespaces, you can execute the following:
@@ -88,18 +75,13 @@ Or if you know the namespace:
kubectl get deployment,daemonset -o wide -n {{namespace}}
```
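+
+To narrow this down to just the images (and therefore the module versions), a `jsonpath` query can help. A sketch, using `vxflexos` as an example namespace:
+
+```shell
+# Print each deployment name alongside the images it runs
+kubectl get deployments -n vxflexos \
+  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'
+```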
-### Does the CSM Installer provide full Container Storage Modules functionality for all products?
-The CSM Installer supports the installation of all the Container Storage Modules and Dell EMC CSI drivers.
-
### Do all Container Storage Modules need to be the same version, or can I mix and match?
It is advised to comply with the support matrices (links below) and not deviate from them with mixed versions.
-- [Dell EMC Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms)
-- [Dell EMC CSI Drivers](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms).
-
-The CSM installer module will help to stay aligned with compatible versions during the first install and future upgrades.
+- [Dell Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms)
+- [Dell CSI Drivers](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms).
### Can I run Container Storage Modules in a production environment?
As of CSM 1.0, the Container Storage Modules are GA and ready for production systems.
@@ -115,4 +97,4 @@ Yes!
All Container Storage Modules are released as open-source projects under Apache-2.0 License. You are free to contribute directly following the [contribution guidelines](https://github.com/dell/csm/blob/main/docs/CONTRIBUTING.md), fork the projects, modify them, and of course share feedback or open tickets ;-)
### What is coming next?
-This is just the beginning of the journey for Dell Container Storage Modules, and there is a full roadmap with more to come, which you can check under the [GithHub Milestones](https://github.com/dell/csm/milestones) page.
+This is just the beginning of the journey for Dell Container Storage Modules, and there is a full roadmap with more to come, which you can check under the [GitHub Milestones](https://github.com/dell/csm/milestones) page.
diff --git a/content/v3/_index.md b/content/v3/_index.md
index 18b7ddfaaa..68f876afee 100644
--- a/content/v3/_index.md
+++ b/content/v3/_index.md
@@ -7,7 +7,7 @@ linkTitle: "Documentation"
This document version is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the [latest version](/csm-docs/)
{{% /pageinfo %}}
-The Dell Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization and, resiliency.
+The Dell Technologies (Dell) Container Storage Modules (CSM) enable simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. They reduce management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization, and resiliency.
@@ -15,16 +15,25 @@ CSM is made up of multiple components including modules (enterprise capabilities
-## CSM Supported Modules and Dell EMC CSI Drivers
+## CSM Supported Modules and Dell CSI Drivers
-| Modules/Drivers | CSM 1.1 | [CSM 1.0](../v1/) | [Previous](../v2/) | [Older](../v3) |
+| Modules/Drivers | CSM 1.2 | [CSM 1.1](../v1/) | [CSM 1.0.1](../v1/) | [CSM 1.0](../v2/) |
| - | :-: | :-: | :-: | :-: |
-| Authorization | 1.1 | 1.0 | - | - |
-| Observability | 1.0 | 1.0 | - | - |
-| Replication | 1.1 | 1.0 | - | - |
-| Resiliency | 1.0 | 1.0 | - | - |
-| CSI Driver for PowerScale | v2.1 | v2.0 | v1.6 | v1.5 |
-| CSI Driver for Unity | v2.1 | v2.0 | v1.6 | v1.5 |
-| CSI Driver for PowerStore | v2.1 | v2.0 | v1.4 | v1.3 |
-| CSI Driver for PowerFlex | v2.1 | v2.0 | v1.5 | v1.4 |
-| CSI Driver for PowerMax | v2.1 | v2.0 | v1.7 | v1.6 |
+| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.1 | 1.0 | 1.0 |
+| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1 | 1.0.1 | 1.0.1 | 1.0 |
+| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.1 | 1.0 | 1.0 |
+| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.0.1 | 1.0.1 | 1.0 |
+| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.1 | v2.0 | v2.0 |
+
+## CSM Modules Support Matrix for Dell CSI Drivers
+
+| CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 |
+| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- |
+| Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
+| Observability v1.1| ✔️ | ❌ | ✔️ | ❌ | ❌ |
+| Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ |
+| Resiliency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️ |
\ No newline at end of file
diff --git a/content/v3/authorization/_index.md b/content/v3/authorization/_index.md
index 329e6065a1..0310e936d6 100644
--- a/content/v3/authorization/_index.md
+++ b/content/v3/authorization/_index.md
@@ -3,18 +3,18 @@ title: "Authorization"
linkTitle: "Authorization"
weight: 4
Description: >
- Dell EMC Container Storage Modules (CSM) for Authorization
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization
---
-[Container Storage Modules](https://github.com/dell/csm) (CSM) for Authorization is part of the open-source suite of Kubernetes storage enablers for Dell EMC products.
+[Container Storage Modules](https://github.com/dell/csm) (CSM) for Authorization is part of the open-source suite of Kubernetes storage enablers for Dell products.
-CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell EMC CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules.
+CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules.
Storage administrators of compatible storage platforms will be able to apply quota and RBAC rules that instantly and automatically restrict cluster tenants' usage of storage resources. Users of storage through CSM for Authorization do not need to have storage admin root credentials to access the storage system.
Kubernetes administrators will have an interface to create, delete, and manage roles/groups to which storage rules may be applied. Administrators and/or users may then generate authentication tokens that may be used by tenants to use storage with proper access policies being automatically enforced.
-The following diagram shows a high-level overview of CSM for Authorization with a `tenant-app` that is using a CSI driver to perform storage operations through the CSM for Authorization `proxy-server` to access the a Dell EMC storage system. All requests from the CSI driver will contain the token for the given tenant that was granted by the Storage Administrator.
+The following diagram shows a high-level overview of CSM for Authorization with a `tenant-app` that is using a CSI driver to perform storage operations through the CSM for Authorization `proxy-server` to access a Dell storage system. All requests from the CSI driver will contain the token for the given tenant that was granted by the Storage Administrator.
![CSM for Authorization](./karavi-authorization-example.png "CSM for Authorization")
@@ -27,13 +27,13 @@ The following diagram shows a high-level overview of CSM for Authorization with
| Ability to shield storage credentials from Kubernetes administrators ensuring credentials are only handled by storage admins | Yes | Yes | Yes | No | No |
{{
}}
-__NOTE:__ PowerScale OneFS implements its own form of Role-Based Access Control (RBAC). CSM for Authorization does not enforce any role-based restrictions for PowerScale. To configure RBAC for PowerScale, refer to the PowerScale OneFS [documentation](https://www.dell.com/support/home/en-us/product-support/product/isilon-onefs/docs).
+**NOTE:** PowerScale OneFS implements its own form of Role-Based Access Control (RBAC). CSM for Authorization does not enforce any role-based restrictions for PowerScale. To configure RBAC for PowerScale, refer to the PowerScale OneFS [documentation](https://www.dell.com/support/home/en-us/product-support/product/isilon-onefs/docs).
## Supported Operating Systems/Container Orchestrator Platforms
{{}}
| COP/OS | Supported Versions |
|-|-|
-| Kubernetes | 1.20, 1.21, 1.22 |
+| Kubernetes | 1.21, 1.22, 1.23 |
| Red Hat OpenShift | 4.8, 4.9|
| RHEL | 7.x, 8.x |
| CentOS | 7.8, 7.9 |
@@ -44,7 +44,7 @@ __NOTE:__ PowerScale OneFS implements its own form of Role-Based Access Control
{{}}
| | PowerMax | PowerFlex | PowerScale |
|---------------|:----------------:|:-------------------:|:----------------:|
-| Storage Array |5978.479.479, 5978.669.669, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2 |
+| Storage Array |5978.479.479, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
{{
}}
## Supported CSI Drivers
@@ -53,12 +53,12 @@ CSM for Authorization supports the following CSI drivers and versions.
{{}}
| Storage Array | CSI Driver | Supported Versions |
| ------------- | ---------- | ------------------ |
-| CSI Driver for Dell EMC PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0,v2.1 |
-| CSI Driver for Dell EMC PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0,v2.1 |
-| CSI Driver for Dell EMC PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0,v2.1 |
+| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0, v2.1, v2.2 |
{{
}}
-__Note:__ If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod.
+**NOTE:** If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod.
To resolve this, please refer to our [troubleshooting guide](./troubleshooting) on the topic.
## Authorization Components Support Matrix
@@ -68,6 +68,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t
| Authorization Sidecar Image Tag | Authorization Proxy Server Version |
| ------------------------------- | ---------------------------------- |
| dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 |
+| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 |
{{
}}
## Roles and Responsibilities
@@ -99,4 +100,4 @@ Tenants of CSM for Authorization can use the token provided by the Storage Admin
4) Tenant Admin inputs the Token into their Kubernetes cluster as a Secret.
5) Tenant Admin updates CSI driver with CSM Authorization sidecar module.
-![CSM for Authorization Workflow](./design2.png "CSM for Authorization Workflow")
\ No newline at end of file
+![CSM for Authorization Workflow](./design2.png "CSM for Authorization Workflow")
diff --git a/content/v3/authorization/cli.md b/content/v3/authorization/cli.md
index eedaf0957d..f1ef1bb5aa 100644
--- a/content/v3/authorization/cli.md
+++ b/content/v3/authorization/cli.md
@@ -3,7 +3,7 @@ title: CLI
linktitle: CLI
weight: 4
description: >
- Dell EMC Container Storage Modules (CSM) for Authorization CLI
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization CLI
---
karavictl is a command-line interface (CLI) used to interact with and manage your Container Storage Modules (CSM) Authorization deployment.
@@ -15,7 +15,6 @@ If you feel that something is unclear or missing in this document, please open u
| - | - |
| [karavictl](#karavictl) | karavictl is used to interact with CSM Authorization Server |
| [karavictl cluster-info](#karavictl-cluster-info) | Display the state of resources within the cluster |
-| [karavictl inject](#karavictl-inject) | Inject the sidecar proxy into a CSI driver pod |
| [karavictl generate](#karavictl-generate) | Generate resources for use with CSM |
| [karavictl generate token](#karavictl-generate-token) | Generate tokens |
| [karavictl role](#karavictl-role) | Manage role |
@@ -48,7 +47,7 @@ karavictl is used to interact with CSM Authorization Server
##### Synopsis
-karavictl provides security, RBAC, and quota limits for accessing Dell EMC
+karavictl provides security, RBAC, and quota limits for accessing Dell
storage products from Kubernetes clusters
##### Options
@@ -112,60 +111,6 @@ redis-commander 1/1 1 1 59m
-### karavictl inject
-
-Inject the sidecar proxy into a CSI driver pod
-
-##### Synopsis
-
-Injects the sidecar proxy into a CSI driver pod.
-
-You can inject resources coming from stdin.
-
-```
-karavictl inject [flags]
-```
-
-##### Options
-
-```
- -h, --help help for inject
- --image-addr string Help message for image-addr
- --proxy-host string Help message for proxy-host
-```
-
-##### Options inherited from parent commands
-
-```
- --config string config file (default is $HOME/.karavictl.yaml)
-```
-
-##### Examples:
-
-Inject into an existing vxflexos CSI driver
-```
-kubectl get secrets,deployments,daemonsets -n vxflexos -o yaml \
- | karavictl inject --image-addr [IMAGE_REPO]:5000/sidecar-proxy:latest --proxy-host [PROXY_HOST_IP] \
- | kubectl apply -f -
-```
-
-##### Output
-
-```
-$ kubectl get secrets,deployments,daemonsets -n vxflexos -o yaml \
-| karavictl inject --image-addr [IMAGE_REPO]:5000/sidecar-proxy:latest --proxy-host [PROXY_HOST_IP] \
-| kubectl apply -f -
-
-secret/karavi-authorization-config created
-deployment.apps/vxflexos-controller configured
-daemonset.apps/vxflexos-node configured
-```
-
-
----
-
-
-
### karavictl generate
Generate resources for use with CSM
diff --git a/content/v3/authorization/deployment/_index.md b/content/v3/authorization/deployment/_index.md
index 8a4ab73dd2..ca15cb03da 100644
--- a/content/v3/authorization/deployment/_index.md
+++ b/content/v3/authorization/deployment/_index.md
@@ -3,12 +3,12 @@ title: Deployment
linktitle: Deployment
weight: 2
description: >
- Dell EMC Container Storage Modules (CSM) for Authorization deployment
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization deployment
---
This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
-- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell EMC CSI drivers with CSM for Authorization
+- Configuring one or more [supported](../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization
## Prerequisites
@@ -27,32 +27,31 @@ The CSM for Authorization proxy server is installed using a single binary instal
The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
-The single binary installer can also be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
+Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
```
make dist build-installer rpm
```
-The `build-installer` step creates a binary at `bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `deploy/rpm/x86_64/`.
+The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`.
This allows CSM for Authorization to be installed in network-restricted environments.
A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
### Installing the RPM
-1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` having the following contents:
+1. Before installing the rpm, some network and security configuration inputs need to be provided in JSON format. The JSON file should be created at `$HOME/.karavi/config.json` with the following contents:
```json
{
"web": {
- "sidecarproxyaddr": "docker_registry/sidecar-proxy:latest",
"jwtsigningsecret": "secret"
},
"proxy": {
"host": ":8080"
},
"zipkin": {
- "collectoruri": "http://DNS_host_name:9411/api/v2/spans",
+ "collectoruri": "http://DNS-hostname:9411/api/v2/spans",
"probability": 1
},
"certificate": {
@@ -60,30 +59,36 @@ A Storage Administrator can execute the installer or rpm package as a root user
"crtFile": "path_to_host_cert_file",
"rootCertificate": "path_to_root_CA_file"
},
- "hostName": "DNS_host_name"
+ "hostname": "DNS-hostname"
}
```
- In the above template, `DNS_host_name` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running the below command on the system:
+ If a secure deployment is not required, an insecure deployment is possible. Note that self-signed certificates will be created for you using cert-manager to allow TLS encryption for communication on the CSM for Authorization proxy server. However, this is not recommended for production environments. For an insecure deployment, the JSON file in the location `$HOME/.karavi/config.json` only requires the following contents:
- ```
- nslookup
+ ```json
+ {
+ "hostname": "DNS-hostname"
+ }
```
-2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS_host_name` is also required. All traffic from `grpc.DNS_host_name` needs to be routed to `DNS_host_name` address, this can be configured by adding a new DNS entry for `grpc.DNS_host_name` or providing a temporary path in the `/etc/hosts` file.
+>__Note__:
+> - `DNS-hostname` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running `nslookup `
+> - There are a number of ways to create certificates. In a production environment, certificates are usually created and managed by an IT administrator. Otherwise, certificates can be created using OpenSSL.
->__Note__: The certificate provided in `crtFile` should be valid for both the `DNS_host_name` and the `grpc.DNS_host_name` address.
+2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS-hostname` is also required. All traffic from `grpc.DNS-hostname` needs to be routed to the `DNS-hostname` address; this can be configured by adding a new DNS entry for `grpc.DNS-hostname` or by providing a temporary path in the system's `/etc/hosts` file.
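+
+ For example, a temporary `/etc/hosts` entry could look like the sketch below (the IP address is a placeholder for your CSM for Authorization proxy server's address):
+
+ ```
+ # /etc/hosts on hosts that need to reach the proxy server
+ 10.0.0.5   DNS-hostname grpc.DNS-hostname
+ ```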
- For example, create the certificate config file with alternate names (to include example.com and grpc.example.com) and then create the .crt file:
+>__Note__: The certificate provided in `crtFile` should be valid for both the `DNS-hostname` and the `grpc.DNS-hostname` address.
- ```
- CN = example.com
- subjectAltName = @alt_names
- [alt_names]
- DNS.1 = grpc.example.com
+ For example, create the certificate config file with alternate names (to include DNS-hostname and grpc.DNS-hostname) and then create the .crt file:
- openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out example.com.crt -days 365 -sha256
- ```
+ ```
+ CN = DNS-hostname
+ subjectAltName = @alt_names
+ [alt_names]
+ DNS.1 = grpc.DNS-hostname
+
+ $ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.crt -days 365 -sha256
+ ```
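+
+ If the `cert_request_file.csr` and private key referenced above do not already exist, they can be generated with OpenSSL first. A sketch using a plain subject (the subjectAltName entries are then supplied via the config shown above):
+
+ ```
+ $ openssl req -new -newkey rsa:2048 -nodes -keyout private_key_File.key -out cert_request_file.csr -subj "/CN=DNS-hostname"
+ ```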
3. To install the rpm package on the system, run the below command:
@@ -102,6 +107,7 @@ The storage administrator must first configure the proxy server with the followi
- Bind roles to tenants
Run the following commands on the Authorization proxy server:
+>__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
```console
# Specify any desired name
@@ -168,6 +174,10 @@ Run the following commands on the Authorization proxy server:
After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
+>__Note__:
+> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
+> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
+
```
echo === Generating token ===
karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml
@@ -175,12 +185,10 @@ After creating the role bindings, the next logical step is to generate the acces
echo === Copy token to Driver Host ===
sshpass -p $DriverHostPassword scp token.yaml ${DriverHostVMUser}@${DriverHostVMIP}:/tmp/token.yaml
```
-
->__Note__: The sample above copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
### Copy the karavictl Binary to the Kubernetes Master Node
-The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node for Kubernetes tenant admins so the Kubernetes tenant admins can configure the Dell EMC CSI driver with CSM for Authorization.
+The karavictl binary is available from the CSM for Authorization proxy server. It needs to be copied to the Kubernetes master node so that Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization.
```
sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl
@@ -188,11 +196,11 @@ sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl
>__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
-## Configuring a Dell EMC CSI Driver with CSM for Authorization
+## Configuring a Dell CSI Driver with CSM for Authorization
The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-### Configuring a Dell EMC CSI Driver
+### Configuring a Dell CSI Driver
Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
@@ -225,8 +233,7 @@ Create the karavi-authorization-config secret using the following command:
>__Note__:
> - Create the driver secret as you would normally except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password
> - For PowerScale, the *systemID* will be the *clusterName* of the array.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which should not be updated by the user. This parameter is updated and used when the driver has been injected with [CSM-Authorization](https://github.com/dell/karavi-authorization).
-
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
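+
+A minimal sketch of how the *isilon-creds* data might look (field names follow the csi-powerscale sample `secret.yaml`; all values are placeholders, so verify against the sample shipped with your driver version):
+
+```yaml
+isilonClusters:
+  - clusterName: "cluster1"     # for PowerScale, also used as the systemID
+    username: "user"
+    password: "password"
+    endpoint: "localhost"       # placeholder; per the note above, point this at the sidecar, not the array
+    endpointPort: "9400"
+    mountEndpoint: "10.0.0.1"   # hostname or IP of the PowerScale OneFS API server
+```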
3. Create the proxy-server-root-certificate secret.
If running in *insecure* mode, create the secret with empty data:
@@ -270,7 +277,9 @@ Please refer to step 5 in the [installation steps for PowerScale](../../csidrive
1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json
->__Note__: In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
+>__Notes__:
+> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
2. Enable CSM for Authorization and provide *proxyHost* address
@@ -294,7 +303,6 @@ CSM for Authorization has a subset of configuration parameters that can be updat
| certificate.crtFile | String | "" |Path to the host certificate file |
| certificate.keyFile | String | "" |Path to the host private key file |
| certificate.rootCertificate | String | "" |Path to the root CA file |
-| web.sidecarproxyaddr | String |"127.0.0.1:5000/sidecar-proxy:latest" |Docker registry address of the CSM for Authorization sidecar-proxy |
| web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens |
Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for the Authorization Server. The secret can be queried using k3s and kubectl like so:
@@ -315,7 +323,7 @@ Copy the new, encoded data and edit the `karavi-config-secret` with the new data
Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
->__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so:
+>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command, as shown below. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' | kubectl -n $namespace apply -f -`
diff --git a/content/v3/authorization/design.md b/content/v3/authorization/design.md
index 8d9cd34138..564ac3c4e0 100644
--- a/content/v3/authorization/design.md
+++ b/content/v3/authorization/design.md
@@ -3,7 +3,7 @@ title: Design
linktitle: Design
weight: 1
description: >
- Dell EMC Container Storage Modules (CSM) for Authorization design
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization design
---
Container Storage Modules (CSM) for Authorization is designed as a service mesh solution and consists of many internal components that work together in concert to achieve its overall functionality.
@@ -56,7 +56,7 @@ The mechanism for managing this storage would utilize a CSI Driver.
### CSI Driver
-A CSI Driver supports the Container Service Interface (CSI) specification. Dell EMC provides customers with CSI Drivers for its various storage arrays.
+A CSI Driver supports the Container Storage Interface (CSI) specification. Dell provides customers with CSI Drivers for its various storage arrays.
CSM for Authorization intends to support a majority, if not all, of these drivers.
A CSI Driver will typically be configured to communicate directly to its intended storage array and as such will be limited in using only the authentication
@@ -66,7 +66,7 @@ methods supported by the Storage Array itself, e.g. Basic authentication over TL
### Sidecar Proxy
-The CSM for Authorization Sidecar Proxy is a sidecar container that gets "injected" into the CSI Driver's Pod. It acts as a proxy and forwards all requests to a
+The CSM for Authorization Sidecar Proxy is deployed as a sidecar in the CSI Driver's Pod. It acts as a proxy and forwards all requests to a
CSM Authorization Server.
The [CSI Driver section](#csi-driver) noted the limitation of a CSI Driver using Storage Array supported authentication methods only. By nature of being a proxy, the CSM for Authorization
@@ -86,12 +86,9 @@ Inbound requests are expected to originate from the CSM for Authorization Sideca
The [*karavictl*](../cli) CLI (Command Line Interface) application allows Storage Admins to manage and interact with a running CSM for Authorization Server.
-Additionally, *karavictl* provides functionality for supporting the sidecar proxy injection mechanism mentioned above. Injection is discussed in more detail later
-on in this document.
-
### Storage Array
-A Storage Array is typically considered to be one of the various Dell EMC storage offerings, e.g. Dell EMC PowerFlex which is supported by CSM for Authorization
+A Storage Array is typically considered to be one of the various Dell storage offerings, e.g. Dell PowerFlex which is supported by CSM for Authorization
today. Support for more Storage Arrays will come in the future.
## How it Works
diff --git a/content/v3/authorization/troubleshooting.md b/content/v3/authorization/troubleshooting.md
index eef3c64a87..0a47cb4ec8 100644
--- a/content/v3/authorization/troubleshooting.md
+++ b/content/v3/authorization/troubleshooting.md
@@ -6,9 +6,6 @@ Description: >
Troubleshooting guide
---
-- [Running `karavictl inject` leaves the vxflexos-controller in a `Pending` state](#running-karavictl-inject-leaves-the-vxflexos-controller-in-a-pending-state)
-- [Running `karavictl inject` leaves the powermax-controller in a `Pending` state](#running-karavictl-inject-leaves-the-powermax-controller-in-a-pending-state)
-- [Running `karavictl inject` leaves the isilon-controller in a `Pending` state](#running-karavictl-inject-leaves-the-isilon-controller-in-a-pending-state)
- [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error)
---
@@ -26,153 +23,6 @@ For OPA related logs, run:
$ k3s kubectl logs deploy/proxy-server -n karavi -c opa
```
-### Running "karavictl inject" leaves the vxflexos-controller in a "Pending" state
-This situation may occur when the number of vxflexos-controller pods that are deployed is equal to the number of schedulable nodes.
-```
-$ kubectl get pods -n vxflexos
-
-NAME READY STATUS RESTARTS AGE
-vxflexos-controller-696cc5945f-4t94d 0/6 Pending 0 3m2s
-vxflexos-controller-75cdcbc5db-k25zx 5/5 Running 0 3m41s
-vxflexos-controller-75cdcbc5db-nkxqh 5/5 Running 0 3m42s
-vxflexos-node-mjc74 3/3 Running 0 2m44s
-vxflexos-node-zgswp 3/3 Running 0 2m44s
-```
-
-__Resolution__
-
-To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using.
-
-1. Edit the deployment
- ```
- $ kubectl edit -n vxflexos deploy/vxflexos-controller
- ```
-
-2. Find `replicas` under the `spec` section of the deployment manifest.
-3. Reduce the number of `replicas` by 1
-4. Save the file
-5. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n vxflexos
-
- NAME READY STATUS RESTARTS AGE
- vxflexos-controller-696cc5945f-4t94d 6/6 Running 0 4m41s
- vxflexos-node-mjc74 3/3 Running 0 3m44s
- vxflexos-node-zgswp 3/3 Running 0 3m44s
- ```
-
-6. Edit the deployment again
-7. Find `replicas` under the `spec` section of the deployment manifest.
-8. Increase the number of `replicas` by 1
-9. Save the file
-10. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n vxflexos
-
- NAME READY STATUS RESTARTS AGE
- vxflexos-controller-696cc5945f-4t94d 6/6 Running 0 5m41s
- vxflexos-controller-696cc5945f-6xxhb 6/6 Running 0 5m41s
- vxflexos-node-mjc74 3/3 Running 0 4m44s
- vxflexos-node-zgswp 3/3 Running 0 4m44s
- ```
-
-### Running "karavictl inject" leaves the powermax-controller in a "Pending" state
-This situation may occur when the number of powermax-controller pods that are deployed is equal to the number of schedulable nodes.
-```
-$ kubectl get pods -n powermax
-
-NAME READY STATUS RESTARTS AGE
-powermax-controller-58d8779f5d-v7t56 0/6 Pending 0 25s
-powermax-controller-78f749847-jqphx 5/5 Running 0 10m
-powermax-controller-78f749847-w6vp5 5/5 Running 0 10m
-powermax-node-gx5pk 3/3 Running 0 21s
-powermax-node-k5gwc 3/3 Running 0 17s
-```
-
-__Resolution__
-
-To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using.
-
-1. Edit the deployment
- ```
- $ kubectl edit -n powermax deploy/powermax-controller
- ```
-
-2. Find `replicas` under the `spec` section of the deployment manifest.
-3. Reduce the number of `replicas` by 1
-4. Save the file
-5. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n powermax
- NAME READY STATUS RESTARTS AGE
- powermax-controller-58d8779f5d-cqx8d 6/6 Running 0 22s
- powermax-node-gx5pk 3/3 Running 3 8m3s
- powermax-node-k5gwc 3/3 Running 3 7m59s
- ```
-
-6. Edit the deployment again
-7. Find `replicas` under the `spec` section of the deployment manifest.
-8. Increase the number of `replicas` by 1
-9. Save the file
-10. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n powermax
- NAME READY STATUS RESTARTS AGE
- powermax-controller-58d8779f5d-cqx8d 6/6 Running 0 22s
- powermax-controller-58d8779f5d-v7t56 6/6 Running 22 8m7s
- powermax-node-gx5pk 3/3 Running 3 8m3s
- powermax-node-k5gwc 3/3 Running 3 7m59s
- ```
-
-### Running "karavictl inject" leaves the isilon-controller in a "Pending" state
-This situation may occur when the number of Isilon controller pods that are deployed is equal to the number of schedulable nodes.
-```
-$ kubectl get pods -n isilon
-
-NAME READY STATUS RESTARTS AGE
-isilon-controller-58d8779f5d-v7t56 0/6 Pending 0 25s
-isilon-controller-78f749847-jqphx 5/5 Running 0 10m
-isilon-controller-78f749847-w6vp5 5/5 Running 0 10m
-isilon-node-gx5pk 3/3 Running 0 21s
-isilon-node-k5gwc 3/3 Running 0 17s
-```
-
-__Resolution__
-
-To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using.
-
-1. Edit the deployment
- ```
- $ kubectl edit -n deploy/isilon-controller
- ```
-
-2. Find `replicas` under the `spec` section of the deployment manifest.
-3. Reduce the number of `replicas` by 1
-4. Save the file
-5. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n isilon
-
- NAME READY STATUS RESTARTS AGE
- isilon-controller-696cc5945f-4t94d 6/6 Running 0 4m41s
- isilon-node-mjc74 3/3 Running 0 3m44s
- isilon-node-zgswp 3/3 Running 0 3m44s
- ```
-
-6. Edit the deployment again
-7. Find `replicas` under the `spec` section of the deployment manifest.
-8. Increase the number of `replicas` by 1
-9. Save the file
-10. Confirm that the updated controller pods have been deployed
- ```
- $ kubectl get pods -n isilon
- NAME READY STATUS RESTARTS AGE
- isilon-controller-58d8779f5d-cqx8d 6/6 Running 0 22s
- isilon-controller-58d8779f5d-v7t56 6/6 Running 22 8m7s
- isilon-node-gx5pk 3/3 Running 3 8m3s
- isilon-node-k5gwc 3/3 Running 3 7m59s
- ```
-
### Running "karavictl tenant" commands result in an HTTP 504 error
This situation may occur if there are Iptables or other firewall rules preventing communication with the provided ``:
```
diff --git a/content/v3/authorization/uninstallation.md b/content/v3/authorization/uninstallation.md
index 4b8fad3b53..fcbcb37aa2 100644
--- a/content/v3/authorization/uninstallation.md
+++ b/content/v3/authorization/uninstallation.md
@@ -3,7 +3,7 @@ title: Uninstallation
linktitle: Uninstallation
weight: 2
description: >
- Dell EMC Container Storage Modules (CSM) for Authorization Uninstallation
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Uninstallation
---
This section outlines the uninstallation steps for Container Storage Modules (CSM) for Authorization.
diff --git a/content/v3/authorization/upgrade.md b/content/v3/authorization/upgrade.md
index ba9a487365..4c31e3a926 100644
--- a/content/v3/authorization/upgrade.md
+++ b/content/v3/authorization/upgrade.md
@@ -3,12 +3,12 @@ title: Upgrade
linktitle: Upgrade
weight: 3
description: >
- Upgrade Dell EMC Container Storage Modules (CSM) for Authorization
+ Upgrade Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization
---
This section outlines the upgrade steps for Container Storage Modules (CSM) for Authorization. The upgrade of CSM for Authorization is handled in 2 parts:
- Upgrading the CSM for Authorization proxy server
-- Upgrading the Dell EMC CSI drivers with CSM for Authorization enabled
+- Upgrading the Dell CSI drivers with CSM for Authorization enabled
### Upgrading CSM for Authorization proxy server
@@ -29,7 +29,7 @@ k3s kubectl version
>__Note__: The above steps manage install and upgrade of all dependencies that are required by the CSM for Authorization proxy server.
-### Upgrading Dell EMC CSI Driver(s) with CSM for Authorization enabled
+### Upgrading Dell CSI Driver(s) with CSM for Authorization enabled
Given a setup where the CSM for Authorization proxy server is already upgraded to the latest version, follow the upgrade instructions for the applicable CSI Driver(s) to upgrade the driver and the CSM for Authorization sidecar
diff --git a/content/v3/contributionguidelines/_index.md b/content/v3/contributionguidelines/_index.md
index 19b639c316..e02b519065 100644
--- a/content/v3/contributionguidelines/_index.md
+++ b/content/v3/contributionguidelines/_index.md
@@ -3,7 +3,7 @@ title: "Contribution Guidelines"
linkTitle: "Contribution Guidelines"
weight: 12
Description: >
- Dell EMC Container Storage Modules (CSM) docs Contribution Guidelines
+ Dell Technologies (Dell) Container Storage Modules (CSM) docs Contribution Guidelines
---
diff --git a/content/v3/csidriver/_index.md b/content/v3/csidriver/_index.md
index a778a41266..495c29b500 100644
--- a/content/v3/csidriver/_index.md
+++ b/content/v3/csidriver/_index.md
@@ -2,11 +2,11 @@
---
title: "CSI Drivers"
linkTitle: "CSI Drivers"
-description: About Dell EMC CSI Drivers
+description: About Dell Technologies (Dell) CSI Drivers
weight: 3
---
-The CSI Drivers by Dell EMC implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell EMC Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using Dell storage system.
+The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using Dell storage system.
![CSI Architecture](Architecture_Diagram.png)
@@ -14,54 +14,57 @@ The CSI Drivers by Dell EMC implement an interface between [CSI](https://kuberne
### Supported Operating Systems/Container Orchestrator Platforms
{{}}
-| | PowerMax | PowerFlex | Unity| PowerScale | PowerStore |
+| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
|---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:|
-| Kubernetes | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 |
+| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
| RHEL | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x |
-| Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 |
+| Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 |
| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
-| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 |
-| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 |
-| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.4.x | 3.4.x | 3.4.x |
-| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 |
-| VMware Tanzu | no | no | NFS | NFS | NFS |
-| Rancher Kubernetes Engine | yes | yes | yes | yes | yes |
+| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 |
+| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 |
+| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.5.x | 3.4.x | 3.4.x |
+| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 |
+| VMware Tanzu | no | no | NFS | NFS | NFS |
+| Rancher Kubernetes Engine | yes | yes | yes | yes | yes |
+| Amazon Elastic Kubernetes Service Anywhere | no | yes | no | no | yes |
+
{{}}
### CSI Driver Capabilities
{{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
-|--------------------------|:--------:|:------------------:|:---------:|:-----------------:|:----------:|
-| CSI Specification | v1.5 | v1.5| v1.5 | v1.5 | v1.5 |
-| Static Provisioning | yes | yes| yes | yes | yes |
-| Dynamic Provisioning | yes | yes| yes | yes | yes |
-| Expand Persistent Volume | yes | yes| yes | yes | yes |
-| Create VolumeSnapshot | yes | yes| yes | yes | yes |
-| Create Volume from Snapshot | yes | yes| yes | yes | yes |
-| Delete Snapshot | yes | yes| yes | yes | yes |
-| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) | RWO (FC/iSCSI)<br>RWO/RWX/ROX (Raw block) | RWO<br>RWO/RWX/ROX/RWOP (Raw block) | RWO/RWOP (FC/iSCSI)<br>RWO/RWX/RWOP (RawBlock)<br>RWO/RWX/ROX/RWOP (NFS) | RWO/RWX/ROX/RWOP | RWO/RWOP (FC/iSCSI)<br>RWO/RWX/ROX/RWOP (RawBlock, NFS) |
-| CSI Volume Cloning | yes | yes | yes | yes | yes |
-| CSI Raw Block Volume | yes | yes | yes | no | yes |
-| CSI Ephemeral Volume | no | yes | yes | yes | yes |
-| Topology | yes | yes | yes | yes | yes |
-| Multi-array | yes | yes | yes | yes | yes |
-| Volume Health Monitoring | no | yes | yes | yes | yes |
+| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+|--------------------------|:--------:|:---------:|:------:|:----------:|:----------:|
+| CSI Driver version | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 |
+| Static Provisioning | yes | yes | yes | yes | yes |
+| Dynamic Provisioning | yes | yes | yes | yes | yes |
+| Expand Persistent Volume | yes | yes | yes | yes | yes |
+| Create VolumeSnapshot | yes | yes | yes | yes | yes |
+| Create Volume from Snapshot | yes | yes | yes | yes | yes |
+| Delete Snapshot | yes | yes | yes | yes | yes |
+| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) | RWO/RWOP (FC/iSCSI)<br>RWO/RWX/ROX/RWOP (Raw block) | RWO/ROX/RWOP<br>RWX (Raw block only) | RWO/ROX/RWOP<br>RWX (Raw block & NFS only) | RWO/RWX/ROX/RWOP | RWO/RWOP (FC/iSCSI)<br>RWO/RWX/ROX/RWOP (RawBlock, NFS) |
+| CSI Volume Cloning | yes | yes | yes | yes | yes |
+| CSI Raw Block Volume | yes | yes | yes | no | yes |
+| CSI Ephemeral Volume | no | yes | yes | yes | yes |
+| Topology | yes | yes | yes | yes | yes |
+| Multi-array | yes | yes | yes | yes | yes |
+| Volume Health Monitoring | yes | yes | yes | yes | yes |
{{}}
### Supported Storage Platforms
{{}}
-| | PowerMax | PowerFlex | Unity| PowerScale | PowerStore |
-|---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:|
-| Storage Array |5978.479.479, 5978.669.669, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x |
+| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+|---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:|
+| Storage Array | 5978.479.479, 5978.711.711<br>Unisphere 9.2 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x, 2.1.x |
{{}}
### Backend Storage Details
{{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale| PowerStore |
+| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
|---------------|:----------------:|:------------------:|:----------------:|:----------------:|:----------------:|
| Fibre Channel | yes | N/A | yes | N/A | yes |
| iSCSI | yes | N/A | yes | N/A | yes |
+| NVMeTCP | N/A | N/A | N/A | N/A | yes |
| NFS | N/A | N/A | yes | yes | yes |
| Other | N/A | ScaleIO protocol | N/A | N/A | N/A |
-| Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS |
-| Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin |
+| Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS |
+| Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin |
| Platform-specific configurable settings | Service Level selection<br>iSCSI CHAP | - | Host IO Limit<br>Tiering Policy<br>NFS Host IO size<br>Snapshot Retention duration | Access Zone<br>NFS version (3 or 4)<br>Configurable Export IPs | iSCSI CHAP |
{{}}
diff --git a/content/v3/csidriver/features/powerflex.md b/content/v3/csidriver/features/powerflex.md
index c92a4d993c..6353aa6f58 100644
--- a/content/v3/csidriver/features/powerflex.md
+++ b/content/v3/csidriver/features/powerflex.md
@@ -7,7 +7,7 @@ Description: Code features for PowerFlex Driver
## Volume Snapshot Feature
-The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.20/1.21/1.22.
+The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.21/1.22/1.23.
In order to use Volume Snapshots, ensure the following components are deployed to your cluster:
- Kubernetes Volume Snapshot CRDs
@@ -84,26 +84,25 @@ spec:
This feature extends the CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter sidecar as part of the PowerFlex driver. Once the sidecar has been deployed, users can make snapshots by using yaml files such as this one:
```
-apiVersion: volumegroup.storage.dell.com/v1alpha2
+apiVersion: volumegroup.storage.dell.com/v1
kind: DellCsiVolumeGroupSnapshot
metadata:
- # Name must be 13 characters or less in length
name: "vg-snaprun1"
namespace: "helmtest-vxflexos"
spec:
# Add fields here
driverName: "csi-vxflexos.dellemc.com"
# defines how to process VolumeSnapshot members when volume group snapshot is deleted
- # "retain" - keep VolumeSnapshot instances
- # "delete" - delete VolumeSnapshot instances
- memberReclaimPolicy: "retain"
+ # "Retain" - keep VolumeSnapshot instances
+ # "Delete" - delete VolumeSnapshot instances
+ memberReclaimPolicy: "Retain"
volumesnapshotclass: "vxflexos-snapclass"
pvcLabel: "vgs-snap-label"
# pvcList:
# - "pvcName1"
# - "pvcName2"
```
-In the metadata section, the name is limited to 13 characters because the snapshotter will append a timestamp to it. Additionally, the pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
+The pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
```
metadata:
name: pvol0
@@ -291,7 +290,7 @@ metadata:
annotations:
meta.helm.sh/release-name: vxflexos
meta.helm.sh/release-namespace: vxflexos
- storageclass.beta.kubernetes.io/is-default-class: "true"
+ storageclass.kubernetes.io/is-default-class: "true"
creationTimestamp: "2020-05-27T13:24:55Z"
labels:
app.kubernetes.io/managed-by: Helm
diff --git a/content/v3/csidriver/features/powermax.md b/content/v3/csidriver/features/powermax.md
index 315786176c..55a57131c9 100644
--- a/content/v3/csidriver/features/powermax.md
+++ b/content/v3/csidriver/features/powermax.md
@@ -122,7 +122,7 @@ When challenged, the host initiator transmits a CHAP credential and CHAP secret
## Custom Driver Name
-With version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell EMC PowerMax in the same Kubernetes/OpenShift cluster.
+With version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster.
To use this feature, set the following values under `customDriverName` in `my-powermax-settings.yaml`.
- Value: Set this to the custom name of the driver.
@@ -140,7 +140,7 @@ For example, if the driver name is set to _driver_ and it is installed in the na
### Install multiple drivers
-To install multiple CSI Drivers for Dell EMC PowerMax in a single Kubernetes cluster, you can take advantage of the custom driver name feature. There are a few important restrictions that should be strictly adhered to:
+To install multiple CSI Drivers for Dell PowerMax in a single Kubernetes cluster, you can take advantage of the custom driver name feature. There are a few important restrictions that should be strictly adhered to:
- Only one driver can be installed in a single namespace
- Different drivers should not connect to a single Unisphere server
- Different drivers should not be used to manage a single PowerMax array
@@ -176,7 +176,7 @@ kind: StorageClass
metadata:
name: powermax-expand-sc
annotations:
- storageclass.beta.kubernetes.io/is-default-class: false
+ storageclass.kubernetes.io/is-default-class: false
provisioner: csi-powermax.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true #Set this attribute to true if you plan to expand any PVCs
@@ -458,3 +458,32 @@ To update the log level dynamically, the user has to edit the ConfigMap `powerma
```
kubectl edit configmap -n powermax powermax-config-params
```
+
+## Volume Health Monitoring
+
+CSI Driver for Dell PowerMax 2.2.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set `controller.healthMonitor.enabled` and `node.healthMonitor.enabled` to true. To change the monitor interval, set the `controller.healthMonitor.interval` parameter.
+
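+For reference, a minimal sketch of the relevant my-powermax-settings.yaml fragment (assuming the parameter layout described above; defaults may differ per release):
+
+```yaml
+controller:
+  healthMonitor:
+    enabled: true  # report volume condition from the controller side
+    interval: 60s  # hypothetical monitoring interval
+node:
+  healthMonitor:
+    enabled: true  # requires the alpha CSIVolumeHealth feature gate on the cluster
+```
+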
+## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE)
+
+Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI Driver for PowerMax 2.2.0+ and Kubernetes version 1.22+.
+
+To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet, by setting command line arguments:
+`--feature-gates="...,ReadWriteOncePod=true"`
+
+### Creating a PersistentVolumeClaim
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+ name: single-writer-only
+spec:
+ accessModes:
+ - ReadWriteOncePod # the volume can be mounted as read-write by a single pod across the whole cluster
+ resources:
+ requests:
+ storage: 1Gi
+```
+
+When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume.
+
+To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instructions from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes).
\ No newline at end of file
diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md
index 98536afa97..acaee8b878 100644
--- a/content/v3/csidriver/features/powerscale.md
+++ b/content/v3/csidriver/features/powerscale.md
@@ -129,7 +129,7 @@ Following are the manifests for the Volume Snapshot Class:
1. VolumeSnapshotClass
```yaml
-# For kubernetes version 20 and above (v1 snaps)
+
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
@@ -192,6 +192,8 @@ spec:
storage: 5Gi
```
+> Starting with CSI PowerScale driver version 2.2, a PersistentVolumeClaim can be created from a VolumeSnapshot with a different isi path; that is, the isi paths of the new volume and of the VolumeSnapshot can differ.
+
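+As an illustration, a sketch of such a PersistentVolumeClaim (names and sizes are hypothetical; the storage class is assumed to use an isi path different from the snapshot's):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: restored-pvc
+spec:
+  storageClassName: isilon-alt-isipath  # hypothetical class with a different isi path
+  dataSource:
+    name: pvol0-snap                    # hypothetical VolumeSnapshot name
+    kind: VolumeSnapshot
+    apiGroup: snapshot.storage.k8s.io
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 5Gi
+```
+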
## Volume Expansion
The CSI PowerScale driver version 1.2 and later supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node).
@@ -206,7 +208,7 @@ kind: StorageClass
metadata:
name: isilon-expand-sc
annotations:
- storageclass.beta.kubernetes.io/is-default-class: "false"
+ storageclass.kubernetes.io/is-default-class: "false"
provisioner: "csi-isilon.dellemc.com"
reclaimPolicy: Delete
parameters:
@@ -424,7 +426,7 @@ For a cluster with multiple network interfaces and if a user wants to segregate
## Volume Limit
-The CSI Driver for Dell EMC PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node.
+The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node.
The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node <node_name> max-isilon-volumes-per-node=<volume_limit>`
@@ -441,7 +443,7 @@ Similarly, users can define the tolerations based on various conditions like mem
## Usage of SmartQuotas to Limit Storage Consumption
-CSI driver for Dell EMC Isilon handles capacity limiting using SmartQuotas feature.
+CSI driver for Dell Isilon handles capacity limiting using the SmartQuotas feature.
To use the SmartQuotas feature, specify the boolean value 'enableQuota' in myvalues.yaml or my-isilon-settings.yaml.
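For instance, a minimal sketch of the relevant my-isilon-settings.yaml fragment (key name as documented above):

```yaml
# Attempt to set a SmartQuota on each newly provisioned volume
enableQuota: true
```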
@@ -494,7 +496,7 @@ kubectl edit configmap -n isilon isilon-config-params
## NAT Support
-CSI Driver for Dell EMC PowerScale is supported in the NAT environment.
+CSI Driver for Dell PowerScale is supported in the NAT environment.
## Configurable permissions for volume directory
@@ -531,7 +533,7 @@ Other ways of configuring powerscale volume permissions remain the same as helm-
## PV/PVC Metrics
-CSI Driver for Dell EMC PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes.
+CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes.
For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events.
### This feature can be enabled
@@ -540,7 +542,7 @@ For example, if a volume were to be deleted from the array, or unmounted outside
## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE)
-Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI Driver for PowerScale 2.1.0 and Kubernetes version 1.22+.
+Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is supported for CSI Driver for PowerScale 2.1.0+ and Kubernetes version 1.22+.
To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet, by setting command line arguments:
`--feature-gates="...,ReadWriteOncePod=true"`
diff --git a/content/v3/csidriver/features/powerstore.md b/content/v3/csidriver/features/powerstore.md
index d05d280695..1f5b1fb50e 100644
--- a/content/v3/csidriver/features/powerstore.md
+++ b/content/v3/csidriver/features/powerstore.md
@@ -183,7 +183,7 @@ kind: StorageClass
metadata:
name: powerstore-expand-sc
annotations:
- storageclass.beta.kubernetes.io/is-default-class: false
+ storageclass.kubernetes.io/is-default-class: false
provisioner: csi-powerstore.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class
@@ -340,6 +340,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
volumeAttributes:
size: "20Gi"
nasName: "csi-nas-name"
+ nfsAcls: "0777"
```
## Controller HA
@@ -413,7 +414,7 @@ allowedTopologies:
- "true"
```
-This example matches all nodes where the driver has a connection to PowerStore with an IP of `127.0.0.1` via FibreChannel. Similar examples can be found in mentioned folder for NFS and iSCSI.
+This example matches all nodes where the driver has a connection to PowerStore with an IP of `127.0.0.1` via FibreChannel. Similar examples can be found in the mentioned folder for NFS, iSCSI and NVMe.
You can check what labels your nodes contain by running `kubectl get nodes --show-labels`
@@ -424,7 +425,7 @@ For any additional information about the topology, see the [Kubernetes Topology
## Reuse PowerStore hostname
-The CSI PowerStore driver version 1.2 and later can automatically detect if the current node was already registered as a Host on the storage array before. It will check if Host initiators and node initiators (FC or iSCSI) match. If they do, the driver will not create a new host and will take the existing name of the Host as nodeID.
+The CSI PowerStore driver version 1.2 and later can automatically detect if the current node was previously registered as a Host on the storage array. It will check if Host initiators and node initiators (FC, iSCSI or NVMe) match. If they do, the driver will not create a new host and will take the existing name of the Host as nodeID.
## Multiarray support
@@ -444,8 +445,10 @@ Create a file called `config.yaml` and populate it with the following content
password: "password" # password for connecting to API
skipCertificateValidation: true # use insecure connection or not
default: true # treat current array as a default (would be used by storage classes without arrayIP parameter)
- blockProtocol: "ISCSI" # what SCSI transport protocol use on node side (FC, ISCSI, None, or auto)
- nasName: "nas-server" # what NAS must be used for NFS volumes
+ blockProtocol: "ISCSI" # what transport protocol use on node side (FC, ISCSI, NVMeTCP, None, or auto)
+ nasName: "nas-server" # what NAS must be used for NFS volumes
+ nfsAcls: "0777" # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
+ # NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
- endpoint: "https://10.0.0.2/api/rest"
globalID: "unique"
username: "user"
@@ -604,14 +607,14 @@ kubectl edit configmap -n csi-powerstore powerstore-config-params
## NAT Support
-CSI Driver for Dell EMC Powerstore is supported in the NAT environment for NFS protocol.
+CSI Driver for Dell PowerStore is supported in the NAT environment for NFS protocol.
The user will be able to install the driver and create pods.
## PV/PVC Metrics
-CSI Driver for Dell EMC Powerstore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set controller.healthMonitor.enabled and node.healthMonitor.enabled to true. To change the monitor interval, set controller.healthMonitor.volumeHealthMonitorInterval parameter.
+CSI Driver for Dell PowerStore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set `controller.healthMonitor.enabled` and `node.healthMonitor.enabled` to true. To change the monitor interval, set the `controller.healthMonitor.volumeHealthMonitorInterval` parameter.
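+For reference, a sketch of the corresponding values.yaml fragment (parameter names as listed above; the interval shown is a hypothetical example):
+
+```yaml
+controller:
+  healthMonitor:
+    enabled: true
+    volumeHealthMonitorInterval: 60s  # hypothetical monitoring interval
+node:
+  healthMonitor:
+    enabled: true  # requires the alpha CSIVolumeHealth feature gate
+```
+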
## Single Pod Access Mode for PersistentVolumes
@@ -638,3 +641,37 @@ spec:
```
>Note: The access mode ReadWriteOnce allows multiple pods to access a single volume within a single worker node and the behavior is consistent across all supported Kubernetes versions.
+
+## POSIX mode bits and NFSv4 ACLs
+
+CSI PowerStore driver version 2.2.0 and later allows users to set user-defined permissions on NFS target mount directory using POSIX mode bits or NFSv4 ACLs.
+
+NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. Please ensure the access control entries (ACEs) are provided in the correct order when specifying NFSv4 ACLs.
+
+To use this feature, provide permissions in `nfsAcls` parameter in values.yaml, secrets or NFS storage class.
+
+For example:
+
+1. POSIX mode bits
+
+```yaml
+nfsAcls: "0755"
+```
+
+2. NFSv4 ACLs
+
+```yaml
+nfsAcls: "A::OWNER@:rwatTnNcCy,A::GROUP@:rxtncy,A::EVERYONE@:rxtncy,A::user@domain.com:rxtncy"
+```
+
+>Note: If no value is specified, the default value of "0777" will be set.
+>POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
+
+
+## NVMe/TCP Support
+
+CSI Driver for Dell PowerStore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol on the secret should be specified as `NVMeTCP`.
+In case blockProtocol is specified as `auto`, the driver will be able to find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, then FC gets the highest priority followed by iSCSI and then NVMeTCP.
+
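+As an illustration, a sketch of the relevant secret entry (endpoint and credentials are placeholders, following the config.yaml layout shown earlier):
+
+```yaml
+- endpoint: "https://10.0.0.1/api/rest"
+  globalID: "unique"
+  username: "user"
+  password: "password"
+  blockProtocol: "NVMeTCP"  # provision block volumes over NVMe/TCP
+```
+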
+>Note: NVMe/TCP is not supported on RHEL 7.x versions and CoreOS.
+>NVMe/TCP is supported with PowerStore 2.1 and above.
diff --git a/content/v3/csidriver/features/unity.md b/content/v3/csidriver/features/unity.md
index b24ad1c022..7559245396 100644
--- a/content/v3/csidriver/features/unity.md
+++ b/content/v3/csidriver/features/unity.md
@@ -185,12 +185,12 @@ kind: StorageClass
metadata:
name: unity-expand-sc
annotations:
- storageclass.beta.kubernetes.io/is-default-class: false
+ storageclass.kubernetes.io/is-default-class: false
provisioner: csi-unity.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class
parameters:
- FsType: xfs
+ csi.storage.k8s.io/fstype: "xfs"
```
To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC unity-pvc-demo of size 3Gi, then you can resize it to 30Gi by updating the PVC.
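For example, a sketch of the edited portion of the PVC spec (only the storage request changes; names as in the example above):

```yaml
spec:
  resources:
    requests:
      storage: 30Gi  # previously 3Gi
```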
@@ -215,7 +215,7 @@ spec:
## Raw block support
-The CSI Unity driver version 1.4 and later supports Raw Block Volumes.
+The CSI Unity driver supports Raw Block Volumes.
Raw Block volumes are created using the volumeDevices list in the pod template spec with each entry accessing a volumeClaimTemplate specifying a volumeMode: Block. The following is an example configuration:
```yaml
@@ -310,7 +310,7 @@ spec:
## Ephemeral Inline Volume
-The CSI Unity driver version 1.4 and later supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
+The CSI Unity driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed.
@@ -353,7 +353,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
- name: volume
csi:
driver: csi-unity.dellemc.com
- fsType: "nfs"
+ csi.storage.k8s.io/fstype: "nfs"
volumeAttributes:
size: "20Gi"
nasName: "csi-nas-name"
@@ -361,7 +361,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
## Controller HA
-The CSI Unity driver version 1.4 and later supports the controller HA feature. Instead of StatefulSet controller pods deployed as a Deployment.
+The CSI Unity driver supports the controller HA feature: instead of a StatefulSet, controller pods are deployed as a Deployment.
By default, the number of replicas is set to 2; you can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation. When installing via Operator, you can change the `replicas` parameter in the `spec.driver` section in your Unity Custom Resource.
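For example, a minimal sketch of the Helm values change (key name as described above):

```yaml
controllerCount: 1  # run a single controller pod, disabling controller HA
```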
@@ -407,7 +407,7 @@ As said before you can configure where node driver pods would be assigned in a s
## Topology
-The CSI Unity driver version 1.4 and later supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
+The CSI Unity driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
This Topology support does not include customer-defined topology, users cannot create their own labels for nodes, they should use whatever labels are returned by the driver and applied automatically by Kubernetes on its nodes.
@@ -441,37 +441,23 @@ You can check what labels your nodes contain by running `kubectl get nodes --sho
For any additional information about the topology, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
-## Support for SLES 15 SP2
-
-The CSI Driver for Dell EMC Unity requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
-
- - open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning**
- - nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning**
- - multipath-tools **multipath-tools is required in order to make use of FC and iSCSI protocols for provisioning**
-
- After installing open-iscsi, ensure "iscsi" and "iscsid" services have been started and /etc/isci/initiatorname.iscsi is created and has the host initiator id. The pre-requisites are mandatory for provisioning with the iSCSI protocol to work.
-
## Volume Limit
-The CSI Driver for Dell EMC Unity allows users to specify the maximum number of Unity volumes that can be used in a node.
+The CSI Driver for Dell Unity allows users to specify the maximum number of Unity volumes that can be used in a node.
The user can set the volume limit for a node by creating a node label `max-unity-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node <node_name> max-unity-volumes-per-node=<volume_limit>`
The user can also set the volume limit for all the nodes in the cluster by specifying the same to `maxUnityVolumesPerNode` attribute in values.yaml file.
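For example, a sketch of the values.yaml setting (the limit shown is a hypothetical value):

```yaml
maxUnityVolumesPerNode: 20  # cluster-wide default; 0 lets the Container Orchestrator decide
```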
->**NOTE:**
-To reflect the changes after setting the value either via node label or in values.yaml file, user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.
-If the value is set both by node label and values.yaml file then node label value will get the precedence and user has to remove the node label in order to reflect the values.yaml value.
-The default value of `maxUnityVolumesPerNode` is 0.
-If `maxUnityVolumesPerNode` is set to zero, then CO SHALL decide how many volumes of this type can be published by the controller to the node.
-The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set.
+>**NOTE:**
+To reflect the changes after setting the value either via node label or in values.yaml file, user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.
+If the value is set both by node label and values.yaml file then node label value will get the precedence and user has to remove the node label in order to reflect the values.yaml value.
+The default value of `maxUnityVolumesPerNode` is 0.
+If `maxUnityVolumesPerNode` is set to zero, then Container Orchestration decides how many volumes of this type can be published by the controller to the node.
+The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set.
## NAT Support
-CSI Driver for Dell EMC Unity is supported in the NAT environment for NFS protocol.
+CSI Driver for Dell Unity is supported in the NAT environment for NFS protocol.
The user will be able to install the driver and create pods.
-## Dynamic Logging Configuration
-
-This feature is introduced in CSI Driver for unity version 2.0.0.
-
## Single Pod Access Mode for PersistentVolumes
-CSI Driver for Unity now supports a new accessmode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity allows to restrict volume access to a single pod in the cluster
+CSI Driver for Unity supports the `ReadWriteOncePod` access mode for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity can restrict volume access to a single pod in the cluster.
Prerequisites
1. Enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet as the ReadWriteOncePod access mode is in alpha for Kubernetes v1.22 and is only supported for CSI volumes. You can enable the feature by setting command line arguments:
@@ -491,12 +477,14 @@ spec:
```
## Volume Health Monitoring
-CSI Driver for Unity now supports volume health monitoring. This is an alpha feature and requires feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`.
+CSI Driver for Unity supports volume health monitoring. This is an alpha feature and requires the feature gate to be enabled by setting the command line argument `--feature-gates="...,CSIVolumeHealth=true"`.
This feature:
1. Reports on the condition of the underlying volumes via events when a volume condition is abnormal. We can watch the events on the describe of the PVC: `kubectl describe pvc <pvc-name> -n <namespace>`
2. Collects the volume stats. We can see the volume usage in the node logs: `kubectl logs <node-pod-name> -n <namespace> -c driver`
-By default this is disabled in CSI Driver for Unity. You will have to set the `volumeHealthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
+By default this is disabled in CSI Driver for Unity. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
+## Dynamic Logging Configuration
+This feature is introduced in CSI Driver for Unity version 2.0.0.
### Helm based installation
As part of driver installation, a ConfigMap with the name `unity-config-params` is created; it contains an attribute `CSI_LOG_LEVEL` that specifies the current log level of the CSI driver.
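For example, the log level can be updated dynamically by editing the ConfigMap in place (a sketch, assuming the driver is installed in the `unity` namespace):

```
kubectl edit configmap -n unity unity-config-params
```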
@@ -554,7 +542,7 @@ apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
- storageclass.beta.kubernetes.io/is-default-class: "false"
+ storageclass.kubernetes.io/is-default-class: "false"
name: unity-nfs
parameters:
arrayId: "APM0***XXXXXX"
@@ -643,7 +631,7 @@ data:
CSI_LOG_LEVEL: "info"
ALLOW_RWO_MULTIPOD_ACCESS: "false"
MAX_UNITY_VOLUMES_PER_NODE: "0"
- SYNC_NODE_INFO_TIME_INTERVAL: "0"
+ SYNC_NODE_INFO_TIME_INTERVAL: "15"
TENANT_NAME: ""
```
>Note: csi-unity supports Tenancy in multi-array setup, provided the TenantName is the same across Unity instances.
diff --git a/content/v3/csidriver/installation/helm/isilon.md b/content/v3/csidriver/installation/helm/isilon.md
index 966de5509f..08d51943eb 100644
--- a/content/v3/csidriver/installation/helm/isilon.md
+++ b/content/v3/csidriver/installation/helm/isilon.md
@@ -3,7 +3,7 @@ title: PowerScale
description: >
Installing CSI Driver for PowerScale via Helm
---
-The CSI Driver for Dell EMC PowerScale can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerscale/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell PowerScale can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerscale/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
- CSI Driver for PowerScale
@@ -18,16 +18,17 @@ The node section of the Helm chart installs the following component in a _Daemon
## Prerequisites
-The following are requirements to be met before installing the CSI Driver for Dell EMC PowerScale:
+The following are requirements to be met before installing the CSI Driver for Dell PowerScale:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
- Mount propagation is enabled on container runtime that is being used
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
+- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first
### Install Helm 3.0
-Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerScale.
+Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerScale.
**Steps**
@@ -44,20 +45,50 @@ controller:
```
#### Volume Snapshot CRDs
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
+## Volume Health Monitoring
+
+The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via Helm.
+To enable this feature, add the block below to the driver manifest before installing the driver; this installs the external
+health monitor sidecar. To get the volume health state, `enabled` under `controller` should be set to true as seen below. To get the
+volume stats, `enabled` under `node` should be set to true.
+ ```yaml
+controller:
+ healthMonitor:
+ # enabled: Enable/Disable health monitor of CSI volumes
+ # Allowed values:
+ # true: enable checking of health condition of CSI volumes
+ # false: disable checking of health condition of CSI volumes
+ # Default value: None
+ enabled: false
+ # healthMonitorInterval: Interval of monitoring volume health condition
+ # Allowed values: Number followed by unit (s,m,h)
+ # Examples: 60s, 5m, 1h
+ # Default value: 60s
+ interval: 60s
+node:
+ healthMonitor:
+ # enabled: Enable/Disable health monitor of CSI volumes- volume usage, volume condition
+ # Allowed values:
+ # true: enable checking of health condition of CSI volumes
+ # false: disable checking of health condition of CSI volumes
+ # Default value: None
+ enabled: false
+ ```
+
#### Installation example
You can install CRDs and the default snapshot controller by running the following commands:
@@ -65,17 +96,31 @@ You can install CRDs and the default snapshot controller by running the followin
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-<your-version>
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+
+### (Optional) Replication feature Requirements
+
+Applicable only if you decided to enable the Replication feature in `values.yaml`
+
+```yaml
+replication:
+ enabled: true
+```
+#### Replication CRDs
+
+The CRDs for replication can be obtained and installed from the csm-replication project on Github. Use `csm-replication/deploy/replicationcrds.all.yaml` located in the csm-replication git repo for the installation.
+
+CRDs should be configured during replication prepare stage with repctl as described in [install-repctl](../../../../replication/deployment/install-repctl)
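+For a manual installation, a sketch (the repository URL is an assumption; using repctl as described above remains the recommended path):
+
+```
+git clone https://github.com/dell/csm-replication.git
+kubectl create -f csm-replication/deploy/replicationcrds.all.yaml
+```
+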
## Install the Driver
**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation.
@@ -93,6 +138,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
| verbose | Indicates what content of the OneFS REST API message should be logged in debug level logs | Yes | 1 |
| kubeletConfigDir | Specify kubelet config dir path | Yes | "/var/lib/kubelet" |
| enableCustomTopology | Indicates PowerScale FQDN/IP which will be fetched from node label and the same will be used by controller and node pod to establish a connection to Array. This requires enableCustomTopology to be enabled. | No | false |
+ | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| ***controller*** | Configure controller pod specific parameters | | |
| controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -103,6 +149,9 @@ kubectl create -f deploy/kubernetes/snapshot-controller
| healthMonitor.interval | Interval of monitoring volume health condition | Yes | 60s |
| nodeSelector | Define node selection constraints for pods of controller deployment | No | |
| tolerations | Define tolerations for the controller deployment, if required | No | |
| leader-election-lease-duration | Duration that non-leader candidates will wait before force-acquiring leadership | No | 20s |
| leader-election-renew-deadline | Duration that the acting leader will keep retrying to refresh leadership before giving up | No | 15s |
| leader-election-retry-period | Duration that the LeaderElector clients should wait between retries of actions | No | 5s |
| ***node*** | Configure node pod specific parameters | | |
| nodeSelector | Define node selection constraints for pods of node daemonset | No | |
| tolerations | Define tolerations for the node daemonset, if required | No | |
@@ -111,6 +160,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
| ***PLATFORM ATTRIBUTES*** | | | |
| endpointPort | Define the HTTPs port number of the PowerScale OneFS API server. If authorization is enabled, endpointPort should be the HTTPS localhost port that the authorization sidecar will listen on. This value acts as a default value for endpointPort, if not specified for a cluster config in secret. | No | 8080 |
| skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. This value acts as a default value for skipCertificateValidation, if not specified for a cluster config in secret. | No | true |
+ | isiAuthType | Indicates the authentication method to be used. If set to 1, session-based authentication is used; otherwise, basic authentication is used | No | 0 |
| isiAccessZone | Define the name of the access zone a volume can be created in. If storageclass is missing with AccessZone parameter, then value of isiAccessZone is used for the same. | No | System |
| enableQuota | Indicates whether the provisioner should attempt to set (later unset) quota on a newly provisioned volume. This requires SmartQuotas to be enabled.| No | true |
| isiPath | Define the base path for the volumes to be created on PowerScale cluster. This value acts as a default value for isiPath, if not specified for a cluster config in secret| No | /ifs/data/csi |
@@ -121,7 +171,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
-
+
*NOTE:*
- ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state till new nodes are available for scheduling. The installer exits with a WARNING on the same.
@@ -141,6 +191,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
| skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. | No | default value from values.yaml |
| endpointPort | Specify the HTTPs port number of the PowerScale OneFS API server | No | default value from values.yaml |
| isiPath | The base path for the volumes to be created on PowerScale cluster. Note: IsiPath parameter in storageclass, if present will override this attribute. | No | default value from values.yaml |
+ | mountEndpoint | Endpoint of the PowerScale OneFS API server, for example, 10.0.0.1. This must be specified if [CSM-Authorization](https://github.com/dell/karavi-authorization) is enabled. | No | - |
The username specified in *secret.yaml* must be from the authentication providers of PowerScale. The user must have enough privileges to perform the actions. The suggested privileges are as follows:
@@ -164,7 +215,7 @@ Create isilon-creds secret using the following command:
- For the key isiIP/endpoint, the user can give either IP address or FQDN. Also, the user can prefix 'https' (For example, https://192.168.1.1) with the value.
- The *isilon-creds* secret has a *mountEndpoint* parameter which should only be updated and used when [Authorization](../../../../authorization) is enabled.
-7. Install OneFS CA certificates by following the instructions from the next section, if you want to validate OneFS API server's certificates. If not, create an empty secret using the following command and an empty secret must be created for the successful installation of CSI Driver for Dell EMC PowerScale.
+7. Install OneFS CA certificates by following the instructions from the next section, if you want to validate the OneFS API server's certificates. If not, create an empty secret using the following command; an empty secret is required for the successful installation of the CSI Driver for Dell PowerScale.
```
kubectl create -f empty-secret.yaml
```
@@ -196,7 +247,7 @@ If the 'skipCertificateValidation' parameter is set to false and a previous inst
### Dynamic update of array details via secret.yaml
-CSI Driver for Dell EMC PowerScale now provides supports for Multi cluster. Now users can link the single CSI Driver to multiple OneFS Clusters by updating *secret.yaml*. Users can now update the isilon-creds secret by editing the *secret.yaml* and executing the following command
+CSI Driver for Dell PowerScale provides support for multiple clusters. Users can link a single CSI Driver to multiple OneFS clusters by updating *secret.yaml*. Users can update the isilon-creds secret by editing *secret.yaml* and executing the following command
`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
@@ -206,11 +257,11 @@ CSI Driver for Dell EMC PowerScale now provides supports for Multi cluster. Now
## Storage Classes
-The CSI driver for Dell EMC PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/ update the manifest as per the requirements.
+With the CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/update the manifest as per the requirements.
### What happens to my existing storage classes?
-*Upgrading from CSI PowerScale v2.0 driver*
+*Upgrading from CSI PowerScale v2.1 driver*:
The storage classes created as part of the installation have the annotation "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish.
*NOTE*:
@@ -232,9 +283,9 @@ Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volu
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerScale v2.0 driver*:
+*Upgrading from CSI PowerScale v2.1 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.1.
+It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.2.
diff --git a/content/v3/csidriver/installation/helm/powerflex.md b/content/v3/csidriver/installation/helm/powerflex.md
index 06354ccfcb..9bdb0ccdc0 100644
--- a/content/v3/csidriver/installation/helm/powerflex.md
+++ b/content/v3/csidriver/installation/helm/powerflex.md
@@ -5,22 +5,22 @@ description: >
Installing the CSI Driver for PowerFlex via Helm
---
-The CSI Driver for Dell EMC PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerflex/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerflex/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell EMC PowerFlex
+- CSI Driver for Dell PowerFlex
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell EMC PowerFlex
+- CSI Driver for Dell PowerFlex
- Kubernetes Node Registrar, which handles the driver registration
## Prerequisites
-The following are requirements that must be met before installing the CSI Driver for Dell EMC PowerFlex:
+The following are requirements that must be met before installing the CSI Driver for Dell PowerFlex:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
- Enable Zero Padding on PowerFlex
@@ -33,7 +33,7 @@ The following are requirements that must be met before installing the CSI Driver
### Install Helm 3.0
-Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerFlex.
+Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerFlex.
**Steps**
@@ -41,7 +41,7 @@ Install Helm 3.0 on the master node before you install the CSI Driver for Dell E
### Enable Zero Padding on PowerFlex
-Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use PowerFlex GUI or the PowerFlex CLI to check this setting. For more information to configure this setting, see [Dell EMC PowerFlex documentation](https://cpsdocs.dellemc.com/bundle/PF_CONF_CUST/page/GUID-D32BDFF7-3014-4894-8E1E-2A31A86D343A.html).
+Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use PowerFlex GUI or the PowerFlex CLI to check this setting. For more information to configure this setting, see [Dell PowerFlex documentation](https://cpsdocs.dellemc.com/bundle/PF_CONF_CUST/page/GUID-D32BDFF7-3014-4894-8E1E-2A31A86D343A.html).
### Install PowerFlex Storage Data Client
@@ -51,17 +51,17 @@ currently only Red Hat CoreOS (RHCOS).
On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions.
-**Optional:** For a typical install, you will pull SDC kernel modules from the Dell EMC FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The [PowerFlex KB article](https://www.dell.com/support/kbdoc/en-us/000184206/how-to-use-a-private-repository-for) has instructions on how to do this.
+**Optional:** For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The [PowerFlex KB article](https://www.dell.com/support/kbdoc/en-us/000184206/how-to-use-a-private-repository-for) has instructions on how to do this.
#### Manual SDC Deployment
-For detailed PowerFlex installation procedure, see the [Dell EMC PowerFlex Deployment Guide](https://docs.delltechnologies.com/bundle/VXF_DEPLOY/page/GUID-DD20489C-42D9-42C6-9795-E4694688CC75.html). Install the PowerFlex SDC as follows:
+For detailed PowerFlex installation procedure, see the [Dell PowerFlex Deployment Guide](https://docs.delltechnologies.com/bundle/VXF_DEPLOY/page/GUID-DD20489C-42D9-42C6-9795-E4694688CC75.html). Install the PowerFlex SDC as follows:
**Steps**
-1. Download the PowerFlex SDC from [Dell EMC Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
+1. Download the PowerFlex SDC from [Dell Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
2. Export the shell variable _MDM_IP_ in a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP address in your environment. This list contains the IP addresses of the MDMs.
-3. Install the SDC per the _Dell EMC PowerFlex Deployment Guide_:
+3. Install the SDC per the _Dell PowerFlex Deployment Guide_:
- For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version.
4. To add more MDM_IP for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx`
@@ -77,14 +77,14 @@ controller:
```
#### Volume Snapshot CRDs
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
You can install CRDs and the default snapshot controller by running the following commands:
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-<your-version>
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
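After the commands complete, you can verify that the CRDs and the snapshot controller are present (a hedged check; the `app=snapshot-controller` label follows the upstream manifests):
```bash
# CRDs installed from client/config/crd
kubectl get crd | grep volumesnapshot
# Snapshot controller deployed into kube-system
kubectl -n kube-system get pods -l app=snapshot-controller
```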
*NOTE:*
-- When using Kubernetes 1.20/1.21/1.22 it is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- When using Kubernetes 1.21/1.22/1.23, it is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
## Install the Driver
**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
@@ -182,7 +182,9 @@ kubectl create -f deploy/kubernetes/snapshot-controller
format and replace/update the secret.
- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
-
+ - If you are using a complex Kubernetes version such as "v1.21.3-mirantis-1", use the kubeVersion check below in the helm/csi-vxflexos/Chart.yaml file:
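+   ```yaml
+   # helm/csi-vxflexos/Chart.yaml (excerpt)
+   kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+   ```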
+
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
6. If using automated SDC deployment:
@@ -248,6 +250,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
- This install script also runs the `verify.sh` script. You will be prompted to enter the credentials for each of the Kubernetes nodes.
The `verify.sh` script needs the credentials to check if SDC has been configured on all nodes.
- It is mandatory to run the install script after any change to the MDM configuration in the `vxflexos-config` secret. Refer to [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration)
+- If an extended Kubernetes version is being used (e.g. `v1.21.3-mirantis-1`) and is failing the version check in Helm even though it falls in the allowed range, then you must go into `helm/csi-vxflexos/Chart.yaml` and replace the standard `kubeVersion` check with the commented-out alternative. *Please note* that this will also allow the use of pre-release alpha and beta versions of Kubernetes, which is not supported.
- (Optional) Enable additional Mount Options - A user can specify additional mount options as needed for the driver.
  - Mount options are specified in the storage class yaml under _mkfsFormatOption_; see the sketch below.
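An illustrative storage class excerpt (the option string is a placeholder; use whatever mkfs options your filesystem needs):
```yaml
parameters:
  mkfsFormatOption: "-b 2048"
```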
@@ -255,7 +258,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
## Certificate validation for PowerFlex Gateway REST API calls
-This topic provides details about setting up the certificate for the CSI Driver for Dell EMC PowerFlex.
+This topic provides details about setting up the certificate for the CSI Driver for Dell PowerFlex.
*Before you begin*
@@ -333,13 +336,10 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c
Starting with CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest in the _samples/_ folder. Use this sample to create a new Volume Snapshot Class for taking Volume Snapshots.
-*NOTE*
-Support for v1beta1 snapshots is being discontinued in this release.
-
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerFlex v2.0 driver*:
+*Upgrading from CSI PowerFlex v2.1 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade earlier versions of CSI PowerFlex to 1.5 or higher before upgrading to 2.2.
diff --git a/content/v3/csidriver/installation/helm/powermax.md b/content/v3/csidriver/installation/helm/powermax.md
index 8c79c2077b..ef8882ce05 100644
--- a/content/v3/csidriver/installation/helm/powermax.md
+++ b/content/v3/csidriver/installation/helm/powermax.md
@@ -5,23 +5,25 @@ description: >
Installing CSI Driver for PowerMax via Helm
---
-CSI Driver for Dell EMC PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script [documentation](https://github.com/dell/csi-powermax/tree/master/dell-csi-helm-installer).
+CSI Driver for Dell PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script [documentation](https://github.com/dell/csi-powermax/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell EMC PowerMax
+- CSI Driver for Dell PowerMax
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
-- CSI PowerMax ReverseProxy (optional)
+- (Optional) Kubernetes External Health Monitor, which provides volume health status
+- (Optional) CSI PowerMax ReverseProxy, which maximizes CSI driver and Unisphere performance
+- (Optional) Dell CSI Replicator, which provides replication capability
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell EMC PowerMax
+- CSI Driver for Dell PowerMax
- Kubernetes Node Registrar, which handles the driver registration
## Prerequisites
-The following requirements must be met before installing CSI Driver for Dell EMC PowerMax:
+The following requirements must be met before installing CSI Driver for Dell PowerMax:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
- Fibre Channel requirements
@@ -34,7 +36,7 @@ The following requirements must be met before installing CSI Driver for Dell EMC
### Install Helm 3
-Install Helm 3 on the master node before you install CSI Driver for Dell EMC PowerMax.
+Install Helm 3 on the master node before you install CSI Driver for Dell PowerMax.
**Steps**
@@ -43,23 +45,23 @@ Install Helm 3 on the master node before you install CSI Driver for Dell EMC Pow
### Fibre Channel Requirements
-CSI Driver for Dell EMC PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver:
+CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
### iSCSI Requirements
-The CSI Driver for Dell EMC PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
+The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
Set up the iSCSI initiators as follows:
- All Kubernetes nodes must have the _iscsi-initiator-utils_ package installed.
- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
-- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell EMC PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell EMC PowerMax if required.
-- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell EMC PowerMax array.
-- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell EMC PowerMax array. All the port group names supplied to the driver must exist on each Dell EMC PowerMax with the same name.
+- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
+- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
+- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
-For more information about configuring iSCSI, see [Dell EMC Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
### Certificate validation for Unisphere REST API calls
@@ -80,11 +82,11 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis
There are no restrictions to how many ports can be present in the iSCSI port groups provided to the driver.
-The same applies to Fibre Channel where there are no restrictions on the number of FA directors a host HBA can be zoned to. See the best practices for host connectivity to Dell EMC PowerMax to ensure that you have multiple paths to your data volumes.
+The same applies to Fibre Channel where there are no restrictions on the number of FA directors a host HBA can be zoned to. See the best practices for host connectivity to Dell PowerMax to ensure that you have multiple paths to your data volumes.
### Linux multipathing requirements
-CSI Driver for Dell EMC PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
+CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
Set up Linux multipathing as follows:
@@ -112,7 +114,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots.
@@ -120,7 +122,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -134,12 +136,12 @@ You can install CRDs and the default snapshot controller by running the followin
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
### (Optional) Replication feature Requirements
@@ -160,7 +162,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one.
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
@@ -192,11 +194,14 @@ CRDs should be configured during replication prepare stage with repctl as descri
| snapshot.enabled | Enable/Disable volume snapshot feature | Yes | true |
| snapshot.snapNamePrefix | Defines a string prefix for the names of the Snapshots created | Yes | "snapshot" |
| resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
+| healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
+| healthMonitor.interval | Interval of monitoring volume health condition | No | 60s |
| nodeSelector | Define node selection constraints for pods of controller deployment | No | |
| tolerations | Define tolerations for the controller deployment, if required | No | |
| **node** | Allows configuration of the node-specific parameters.| - | - |
| tolerations | Add tolerations as per requirement | No | - |
| nodeSelector | Add node selectors as per requirement | No | - |
+| healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
|defaultCredentialsSecret| This secret name refers to: <br> 1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode. <br> 2. The proxy credentials if the driver is installed with proxy in StandAlone mode. <br> 3. The default Unisphere credentials if credentialsSecret is not specified for a management server. | Yes | powermax-creds |
| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - |
@@ -250,11 +255,11 @@ Starting with CSI PowerMax v1.7, `dell-csi-helm-installer` will not create any V
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerMax v2.0 driver*:
+*Upgrading from CSI PowerMax v2.1 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade earlier versions of CSI PowerMax to 1.7 or higher before upgrading to 2.2.
## Sample values file
The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
diff --git a/content/v3/csidriver/installation/helm/powerstore.md b/content/v3/csidriver/installation/helm/powerstore.md
index 868d7c27cd..7b009d83a4 100644
--- a/content/v3/csidriver/installation/helm/powerstore.md
+++ b/content/v3/csidriver/installation/helm/powerstore.md
@@ -4,26 +4,26 @@ description: >
Installing CSI Driver for PowerStore via Helm
---
-The CSI Driver for Dell EMC PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerstore/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerstore/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell EMC PowerStore
+- CSI Driver for Dell PowerStore
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- (Optional) Kubernetes External Snapshotter, which provides snapshot support
-- Kubernetes External Resizer, which resizes the volume
+- (Optional) Kubernetes External Resizer, which resizes the volume
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
-- CSI Driver for Dell EMC PowerStore
+- CSI Driver for Dell PowerStore
- Kubernetes Node Registrar, which handles the driver registration
## Prerequisites
-The following are requirements to be met before installing the CSI Driver for Dell EMC PowerStore:
+The following are requirements to be met before installing the CSI Driver for Dell PowerStore:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
-- If you plan to use either the Fibre Channel or iSCSI protocol, refer to either _Fibre Channel requirements_ or _Set up the iSCSI Initiator_ sections below. You can use NFS volumes without FC or iSCSI configuration.
-> You can use either the Fibre Channel or iSCSI protocol, but you do not need both.
+- If you plan to use the Fibre Channel, iSCSI, or NVMe/TCP protocol, refer to the _Fibre Channel requirements_, _Set up the iSCSI Initiator_, or _Set up the NVMe/TCP Initiator_ sections below. You can use NFS volumes without any FC, iSCSI, or NVMe/TCP configuration.
+> You can use the Fibre Channel, iSCSI, or NVMe/TCP protocol, but you do not need all three.
> If you want to use preconfigured iSCSI/FC hosts be sure to check that they are not part of any host group
- Linux native multipathing requirements
@@ -35,7 +35,7 @@ The following are requirements to be met before installing the CSI Driver for De
### Install Helm 3.0
-Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerStore.
+Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerStore.
**Steps**
@@ -43,26 +43,39 @@ Install Helm 3.0 on the master node before you install the CSI Driver for Dell E
### Fibre Channel requirements
-Dell EMC PowerStore supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the
-following requirement is met before you install the CSI Driver for Dell EMC PowerStore:
+Dell PowerStore supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the
+following requirement is met before you install the CSI Driver for Dell PowerStore:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.
### Set up the iSCSI Initiator
-The CSI Driver for Dell EMC PowerStore v1.4 and higher supports iSCSI connectivity.
+The CSI Driver for Dell PowerStore v1.4 and higher supports iSCSI connectivity.
If you use the iSCSI protocol, set up the iSCSI initiators as follows:
- Ensure that the iSCSI initiators are available on both Controller and Worker nodes.
-- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell EMC PowerStore array that
-has IP interfaces. Manually create IP routes for each node that connects to the Dell EMC PowerStore.
+- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell PowerStore array that
+has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerStore.
- All Kubernetes nodes must have the _iscsi-initiator-utils_ package for CentOS/RHEL or _open-iscsi_ package for Ubuntu installed, and the _iscsid_ service must be enabled and running.
To do this, run the `systemctl enable --now iscsid` command.
- Ensure that the unique initiator name is set in _/etc/iscsi/initiatorname.iscsi_.
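A quick way to apply and verify these requirements on a RHEL/CentOS node (a sketch; output will vary by host):
```bash
# Install the initiator utilities and enable the iSCSI daemon
sudo yum install -y iscsi-initiator-utils
sudo systemctl enable --now iscsid
# Confirm that a unique initiator name is set
cat /etc/iscsi/initiatorname.iscsi
```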
-For information about configuring iSCSI, see _Dell EMC PowerStore documentation_ on Dell EMC Support.
+For information about configuring iSCSI, see _Dell PowerStore documentation_ on Dell Support.
+
+
+### Set up the NVMe/TCP Initiator
+
+If you want to use the NVMe/TCP protocol, set up the NVMe/TCP initiators as follows:
+- The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and an interactive shell option. Install the NVMe CLI tool on the host using the below command (shown for Ubuntu/Debian):
+`sudo apt install nvme-cli`
+
+- The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required to use NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the below commands:
+```bash
+modprobe nvme
+modprobe nvme_tcp
+```
### Linux multipathing requirements
-Dell EMC PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell EMC
+Dell PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell
PowerStore.
Set up Linux multipathing as follows:
@@ -82,7 +95,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) for the installation.
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
@@ -90,13 +103,45 @@ The CSI external-snapshotter sidecar is split into two controllers:
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available:
-Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) for the installation.
+Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
+## Volume Health Monitoring
+
+The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via Helm.
+To enable this feature, add the below block to the driver manifest before installing the driver; this installs the external
+health monitor sidecar. To report the volume health state, set `enabled` to `true` under `controller`, as seen below. To report
+volume stats, set `enabled` to `true` under `node`.
+ ```yaml
+controller:
+ healthMonitor:
+ # enabled: Enable/Disable health monitor of CSI volumes
+ # Allowed values:
+ # true: enable checking of health condition of CSI volumes
+ # false: disable checking of health condition of CSI volumes
+ # Default value: None
+ enabled: false
+
+ # volumeHealthMonitorInterval: Interval of monitoring volume health condition
+ # Allowed values: Number followed by unit (s,m,h)
+ # Examples: 60s, 5m, 1h
+ # Default value: 60s
+ volumeHealthMonitorInterval: 60s
+
+node:
+ healthMonitor:
+ # enabled: Enable/Disable health monitor of CSI volumes- volume usage, volume condition
+ # Allowed values:
+ # true: enable checking of health condition of CSI volumes
+ # false: disable checking of health condition of CSI volumes
+ # Default value: None
+ enabled: false
+ ```
+
#### Installation example
You can install the CRDs and the default snapshot controller by running the following commands:
@@ -104,12 +149,12 @@ You can install CRDs and default snapshot controller by running following comman
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is installed along with the driver and does not involve any extra configuration.
### (Optional) Replication feature Requirements
@@ -129,7 +174,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example; you can choose any name for the namespace,
but make sure to use the same namespace throughout the installation.
3. Check `helm/csi-powerstore/driver-image.yaml` and confirm the driver image points to the new image.
@@ -139,8 +184,10 @@ CRDs should be configured during replication prepare stage with repctl as descri
- *username*, *password*: defines credentials for connecting to array.
- *skipCertificateValidation*: defines if we should use an insecure connection or not.
- *isDefault*: defines if we should treat the current array as a default.
- - *blockProtocol*: defines what SCSI transport protocol we should use (FC, ISCSI, None, or auto).
+ - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, None, or auto).
- *nasName*: defines what NAS should be used for NFS volumes.
+ - *nfsAcls* (Optional): defines the permissions (POSIX mode bits or NFSv4 ACLs) to be set on the NFS target mount directory.
+ NFSv4 ACLs are supported only for NFSv4 shares on NFSv4-enabled NAS servers. POSIX ACLs are not supported; only POSIX mode bits are supported for NFSv3 shares.
Add more blocks similar to above for each PowerStore array if necessary.
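A hypothetical single-array entry assembled from the parameters above (the enclosing `arrays:` key and the `endpoint`/`globalID` fields follow the shipped sample secret and are assumptions here; all values are placeholders):
```yaml
arrays:
  - endpoint: "https://10.0.0.1/api/rest"  # PowerStore management endpoint (placeholder)
    globalID: "PS000000000001"             # array global ID (placeholder)
    username: "admin"
    password: "password"
    skipCertificateValidation: true
    isDefault: true
    blockProtocol: "auto"
    nasName: "nas-server"
    nfsAcls: "0777"
```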
5. Create storage classes using the ones from the `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f <path-to-storageclass-file>`
@@ -157,6 +204,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| externalAccess | Defines additional entries for hostAccess of NFS volumes, single IP address and subnet are valid entries | No | " " |
| kubeletConfigDir | Defines kubelet config path for cluster | Yes | "/var/lib/kubelet" |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. | Yes | "IfNotPresent" |
+| nfsAcls | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| connection.enableCHAP | Defines whether the driver should use CHAP for iSCSI connections or not | No | False |
| controller.controllerCount | Defines number of replicas of controller deployment | Yes | 2 |
| controller.volumeNamePrefix | Defines the string added to each volume that the CSI driver creates | No | "csivol" |
@@ -172,6 +220,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
| node.nodeSelector | Defines what nodes would be selected for pods of node daemonset | Yes | " " |
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
+| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None`, `File` and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
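+For example, a `my-powerstore-settings.yaml` override enabling the new options might look like this (an illustrative sketch; parameter names and defaults come from the table above):
+```yaml
+# Illustrative overrides only; merge with the rest of your values file
+nfsAcls: "0777"
+fsGroupPolicy: "ReadWriteOnceWithFSType"
+node:
+  healthMonitor:
+    enabled: true
+```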
8. Install the driver using `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
- After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n csi-powerstore`
@@ -187,7 +236,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Storage Classes
-The CSI driver for Dell EMC PowerStore version 1.3 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests have been provided in the `samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
+The CSI driver for Dell PowerStore version 1.3 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests have been provided in the `samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
### What happens to my existing storage classes?
@@ -201,13 +250,14 @@ There are samples storage class yaml files available under `samples/storageclass
1. Edit the sample storage class yaml file and update following parameters:
- *arrayID*: specifies which storage cluster the driver should use; if not specified, the driver uses the storage cluster marked as `default` in `samples/secret/secret.yaml`
-- *FsType*: specifies what filesystem type driver should use, possible variants `ext4`, `xfs`, `nfs`, if not specified driver will use `ext4` by default.
+- *FsType*: specifies which filesystem type the driver should use; possible values are `ext3`, `ext4`, `xfs` and `nfs`. If not specified, the driver uses `ext4` by default.
+- *nfsAcls* (Optional): defines the permissions (POSIX mode bits or NFSv4 ACLs) to be set on the NFS target mount directory.
- *allowedTopologies* (Optional): If you want you can also add topology constraints.
```yaml
allowedTopologies:
- matchLabelExpressions:
- key: csi-powerstore.dellemc.com/12.34.56.78-iscsi
-# replace "-iscsi" with "-fc" or "-nfs" at the end to use FC or NFS enabled hosts
+# replace "-iscsi" with "-fc", "-nvme" or "-nfs" at the end to use FC, NVMe or NFS enabled hosts
# replace "12.34.56.78" with PowerStore endpoint IP
values:
- "true"
@@ -226,11 +276,11 @@ Starting CSI PowerStore v1.4, `dell-csi-helm-installer` will not create any Volu
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI PowerStore v2.0 driver*:
+*Upgrading from CSI PowerStore v2.1 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade earlier versions of CSI PowerStore to 1.4 or higher before upgrading to 2.2.
## Dynamically update the powerstore secrets
@@ -253,4 +303,4 @@ cd dell-csi-helm-installer
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade
```
-Note: here `my-powerstore-settings.yaml` is a `values.yaml` file which user has used for driver installation.
\ No newline at end of file
+Note: here `my-powerstore-settings.yaml` is a `values.yaml` file which user has used for driver installation.
diff --git a/content/v3/csidriver/installation/helm/unity.md b/content/v3/csidriver/installation/helm/unity.md
index 1c7c5122fc..0db49246f5 100644
--- a/content/v3/csidriver/installation/helm/unity.md
+++ b/content/v3/csidriver/installation/helm/unity.md
@@ -4,7 +4,7 @@ description: >
Installing CSI Driver for Unity via Helm
---
-The CSI Driver for Dell EMC Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
The controller section of the Helm chart installs the following components in a _Deployment_:
@@ -13,6 +13,7 @@ The controller section of the Helm chart installs the following components in a
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
+- Kubernetes External Health Monitor, which provides volume health status
The node section of the Helm chart installs the following component in a _DaemonSet_:
@@ -38,7 +39,7 @@ Install CSI Driver for Unity using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.1.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.2.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -51,6 +52,8 @@ Procedure
**Note**:
* ArrayId corresponds to the serial number of Unity array.
* Unity Array username must have role as Storage Administrator to be able to perform CRUD operations.
+ * If you are using a complex Kubernetes version such as "v1.21.3-mirantis-1", use the kubeVersion check below in the helm/csi-unity/Chart.yaml file:
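+   ```yaml
+   # helm/csi-unity/Chart.yaml (excerpt)
+   kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+   ```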
2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory as `csi-install.sh`, to customize settings for installation.
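   For example (paths assume the repository root cloned earlier, with the installer in the top-level `dell-csi-helm-installer` directory):
   ```bash
   cp helm/csi-unity/values.yaml dell-csi-helm-installer/myvalues.yaml
   ```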
@@ -64,12 +67,13 @@ Procedure
| logLevel | LogLevel is used to set the logging level of the driver | true | info |
| allowRWOMultiPodAccess | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode. | false | false |
| kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
- | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | false | 15 |
+ | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | false | 15 |
| maxUnityVolumesPerNode | Maximum number of volumes that controller can publish to the node. | false | 0 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. (unity-cert-0..unity-cert-n). The minimum value should be 1. | false | 1 |
| imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
| podmon.enabled | service to monitor failing jobs and notify | false | - |
| podmon.image| pod man image name | false | - |
+ | tenantName | Tenant name added while adding host entry to the array | No | |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-unity controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -78,13 +82,13 @@ Procedure
| resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
| nodeSelector | Define node selection constraints for pods of controller deployment | No | |
| tolerations | Define tolerations for the controller deployment, if required | No | |
- | volumeHealthMonitor.enabled | Enable/Disable deployment of external health monitor sidecar for controller side volume health monitoring. | No | false |
- | volumeHealthMonitor.interval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h) | No | 60s |
+ | healthMonitor.enabled | Enable/Disable deployment of external health monitor sidecar for controller side volume health monitoring. | No | false |
+ | healthMonitor.interval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h) | No | 60s |
| ***node*** | Allows configuration of the node-specific parameters.| - | - |
- | tolerations | Define tolerations for the node daemonset, if required | No | |
| dnsPolicy | Define the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet |
- | volumeHealthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
- | tenantName | Tenant name added while adding host entry to the array | No | |
+ | healthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
+ | nodeSelector | Define node selection constraints for pods of node daemonset | No | |
+ | tolerations | Define tolerations for the node daemonset, if required | No | |
**Note**:
@@ -118,19 +122,19 @@ Procedure
maxUnityVolumesPerNode: 0
```
-4. For certificate validation of Unisphere REST API calls refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret with file `helm/emptysecret.yaml` file by running the `kubectl create -f helm/emptysecret.yaml` command.
+4. For certificate validation of Unisphere REST API calls, refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret from the `csi-unity/samples/secret/emptysecret.yaml` file by running the `kubectl create -f csi-unity/samples/secret/emptysecret.yaml` command.
5. Prepare the `secret.yaml` for driver configuration.
The following table lists driver configuration parameters for multiple storage arrays.
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | storageArrayList.username | Username for accessing Unity system | true | - |
- | storageArrayList.password | Password for accessing Unity system | true | - |
- | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - |
- | storageArrayList.arrayId | ArrayID for Unity system | true | - |
+ | Parameter | Description | Required | Default |
+ | ------------------------- | ----------------------------------- | -------- |-------- |
+ | storageArrayList.username | Username for accessing Unity system | true | - |
+ | storageArrayList.password | Password for accessing Unity system | true | - |
+ | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - |
+ | storageArrayList.arrayId | ArrayID for Unity system | true | - |
| storageArrayList.skipCertificateValidation | "skipCertificateValidation" determines if the driver is going to validate Unisphere certificates while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of the CA which signed the Unisphere certificate. | true | true |
- | storageArrayList.isDefault | An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | false | false |
+ | storageArrayList.isDefault| An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
Example: secret.yaml
@@ -182,7 +186,7 @@ Procedure
```
**Note:**
- * Parameters "allowRWOMultiPodAccess" and "syncNodeInfoTimeInterval" have been enabled for configuration in values.yaml and this helps users to dynamically change these values without the need for driver re-installation.
+ * Parameters "allowRWOMultiPodAccess" and "syncNodeInfoInterval" have been enabled for configuration in values.yaml and this helps users to dynamically change these values without the need for driver re-installation.
6. Setup for snapshots.
@@ -197,19 +201,14 @@ Procedure
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster
#### Volume Snapshot CRD's
- The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) for the installation.
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
- Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) for the installation.
-
- **Note**:
- - The manifests available on GitHub install the snapshotter image:
- - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
+ Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
#### Installation example
@@ -218,12 +217,12 @@ Procedure
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
- kubectl create -f client/config/crd
- kubectl create -f deploy/kubernetes/snapshot-controller
+ kubectl kustomize client/config/crd | kubectl create -f -
+ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
**Note**:
- - It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+ - It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
@@ -233,7 +232,7 @@ Procedure
A successful installation must display messages that look similar to the following samples:
```
------------------------------------------------------
- > Installing CSI Driver: csi-unity on 1.20
+ > Installing CSI Driver: csi-unity on 1.22
------------------------------------------------------
------------------------------------------------------
> Checking to see if CSI Driver is already installed
@@ -241,52 +240,52 @@ Procedure
------------------------------------------------------
> Verifying Kubernetes and driver configuration
------------------------------------------------------
- |- Kubernetes Version: 1.20
+ |- Kubernetes Version: 1.22
|
|- Driver: csi-unity
|
- |- Verifying Kubernetes versions
- |
- |--> Verifying minimum Kubernetes version Success
- |
- |--> Verifying maximum Kubernetes version Success
+ |- Verifying Kubernetes version
|
- |- Verifying that required namespaces have been created Success
+ |--> Verifying minimum Kubernetes version Success
|
- |- Verifying that required secrets have been created Success
+ |--> Verifying maximum Kubernetes version Success
|
- |- Verifying that required secrets have been created Success
+ |- Verifying that required namespaces have been created Success
+ |
+ |- Verifying that required secrets have been created Success
+ |
+ |- Verifying that optional secrets have been created Success
|
|- Verifying alpha snapshot resources
- |
- |--> Verifying that alpha snapshot CRDs are not installed Success
+ |
+ |--> Verifying that alpha snapshot CRDs are not installed Success
|
|- Verifying sshpass installation.. |
|- Verifying iSCSI installation
- Enter the root password of 10.**.**.**:
+ Enter the root password of 10.**.**.**:
- Enter the root password of 10.**.**.**:
+ Enter the root password of 10.**.**.**:
Success
|
|- Verifying snapshot support
- |
- |--> Verifying that snapshot CRDs are available Success
- |
- |--> Verifying that the snapshot controller is available Success
|
- |- Verifying helm version Success
+ |--> Verifying that snapshot CRDs are available Success
+ |
+ |--> Verifying that the snapshot controller is available Success
|
- |- Verifying helm values version Success
+ |- Verifying helm version Success
+ |
+ |- Verifying helm values version Success
------------------------------------------------------
> Verification Complete - Success
------------------------------------------------------
|
- |- Installing Driver Success
- |
- |--> Waiting for Deployment unity-controller to be ready Success
- |
- |--> Waiting for DaemonSet unity-node to be ready Success
+ |- Installing Driver Success
+ |
+ |--> Waiting for Deployment unity-controller to be ready Success
+ |
+ |--> Waiting for DaemonSet unity-node to be ready Success
------------------------------------------------------
> Operation complete
------------------------------------------------------
@@ -301,7 +300,7 @@ Procedure
## Certificate validation for Unisphere REST API calls
-This topic provides details about setting up the certificate validation for the CSI Driver for Dell EMC Unity.
+This topic provides details about setting up the certificate validation for the CSI Driver for Dell Unity.
*Before you begin*
@@ -339,11 +338,11 @@ For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does n
### What happens to my existing Volume Snapshot Classes?
-*Upgrading from CSI Unity v2.0 driver*:
+*Upgrading from CSI Unity v2.1 driver*:
The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity to 1.6 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade earlier versions of CSI Unity to 1.6 or higher before upgrading to 2.2.
## Storage Classes
@@ -360,7 +359,7 @@ Upgrading from an older version of the driver: The storage classes will be delet
>Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.
**Steps to create storage class:**
-There are samples storage class yaml files available under `helm/samples/storageclass`. These can be copied and modified as needed.
+There are sample storage class yaml files available under `csi-unity/samples/storageclass`. These can be copied and modified as needed.
1. Pick any of `unity-fc.yaml`, `unity-iscsi.yaml` or `unity-nfs.yaml`
2. Copy the file as `unity-<storage_class_name>-fc.yaml`, `unity-<storage_class_name>-iscsi.yaml` or `unity-<storage_class_name>-nfs.yaml`
diff --git a/content/v3/csidriver/installation/offline/_index.md b/content/v3/csidriver/installation/offline/_index.md
index a6dd5941fa..59a7c082f3 100644
--- a/content/v3/csidriver/installation/offline/_index.md
+++ b/content/v3/csidriver/installation/offline/_index.md
@@ -1,10 +1,10 @@
---
-title: Offline Installation of Dell EMC CSI Storage Providers
+title: Offline Installation of Dell CSI Storage Providers
linktitle: Offline Installer
-description: Offline Installation of Dell EMC CSI Storage Providers
+description: Offline Installation of Dell CSI Storage Providers
---
-The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell EMC CSI Storage Providers, via either Helm
+The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell CSI Storage Providers, via either Helm
or the Dell CSI Operator.
This includes the following drivers:
@@ -43,6 +43,8 @@ To perform an offline installation of a driver or the Operator, the following st
2. Unpacking the offline bundle created in Step 1 and preparing for installation
3. Perform either a Helm installation or Operator installation using the files obtained after unpacking in Step 2
+**NOTE:** It is recommended to use the same build tool for packing and unpacking images (either docker or podman).
+
### Building an offline bundle
This needs to be performed on a Linux system with access to the internet, as a git repo will need to be cloned and container images pulled from public registries.
@@ -63,84 +65,73 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-[user@anothersystem /home/user]# git clone https://github.com/dell/dell-csi-operator.git
+git clone https://github.com/dell/dell-csi-operator.git
```
```
-[user@anothersystem /home/user]# cd dell-csi-operator
+cd dell-csi-operator
```
```
-[user@system /home/user/dell-csi-operator]# scripts/csi-offline-bundle.sh -c
-*
-* Building image manifest file
+[root@user scripts]# ./csi-offline-bundle.sh -c
*
-* Pulling container images
-
- dellemc/csi-isilon:v1.4.0.000R
- dellemc/csi-isilon:v1.5.0
- dellemc/csi-isilon:v1.6.0
- dellemc/csipowermax-reverseproxy:v1.3.0
- dellemc/csi-powermax:v1.5.0.000R
- dellemc/csi-powermax:v1.6.0
- dellemc/csi-powermax:v1.7.0
- dellemc/csi-powerstore:v1.2.0.000R
- dellemc/csi-powerstore:v1.3.0
- dellemc/csi-powerstore:v1.4.0
- dellemc/csi-unity:v1.4.0.000R
- dellemc/csi-unity:v1.5.0
- dellemc/csi-unity:v1.6.0
- dellemc/csi-vxflexos:v1.3.0.000R
- dellemc/csi-vxflexos:v1.4.0
- dellemc/csi-vxflexos:v1.5.0
- dellemc/dell-csi-operator:v1.4.0
+* Pulling and saving container images
+
+ dellemc/csi-isilon:v2.0.0
+ dellemc/csi-isilon:v2.1.0
+ dellemc/csipowermax-reverseproxy:v1.4.0
+ dellemc/csi-powermax:v2.0.0
+ dellemc/csi-powermax:v2.1.0
+ dellemc/csi-powerstore:v2.0.0
+ dellemc/csi-powerstore:v2.1.0
+ dellemc/csi-unity:v2.0.0
+ dellemc/csi-unity:v2.1.0
+ localregistry:5028/csi-unity/csi-unity:20220303110841
+ dellemc/csi-vxflexos:v2.0.0
+ dellemc/csi-vxflexos:v2.1.0
+ localregistry:5035/csi-operator/dell-csi-operator:v1.7.0
dellemc/sdc:3.5.1.1
dellemc/sdc:3.5.1.1-1
+ dellemc/sdc:3.6
docker.io/busybox:1.32.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.0.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
- k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2
- k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
- k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1
- k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
- k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
- k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
- k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
- k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0
- quay.io/k8scsi/csi-resizer:v1.0.0
- quay.io/k8scsi/csi-resizer:v1.1.0
-
-*
-* Saving images
+ ...
+ ...
*
* Copying necessary files
- /dell/git/dell-csi-operator/config
- /dell/git/dell-csi-operator/deploy
- /dell/git/dell-csi-operator/samples
- /dell/git/dell-csi-operator/scripts
- /dell/git/dell-csi-operator/README.md
- /dell/git/dell-csi-operator/LICENSE
+ /root/dell-csi-operator/driverconfig
+ /root/dell-csi-operator/deploy
+ /root/dell-csi-operator/samples
+ /root/dell-csi-operator/scripts
+ /root/dell-csi-operator/OLM.md
+ /root/dell-csi-operator/README.md
+ /root/dell-csi-operator/LICENSE
*
* Compressing release
-dell-csi-operator-bundle/
-dell-csi-operator-bundle/samples/
-...
-
-...
-dell-csi-operator-bundle/LICENSE
-dell-csi-operator-bundle/README.md
+ dell-csi-operator-bundle/
+ dell-csi-operator-bundle/driverconfig/
+ dell-csi-operator-bundle/driverconfig/config.yaml
+ dell-csi-operator-bundle/driverconfig/isilon_v200_v119.json
+ dell-csi-operator-bundle/driverconfig/isilon_v200_v120.json
+ dell-csi-operator-bundle/driverconfig/isilon_v200_v121.json
+ dell-csi-operator-bundle/driverconfig/isilon_v200_v122.json
+ dell-csi-operator-bundle/driverconfig/isilon_v210_v120.json
+ dell-csi-operator-bundle/driverconfig/isilon_v210_v121.json
+ dell-csi-operator-bundle/driverconfig/isilon_v210_v122.json
+ dell-csi-operator-bundle/driverconfig/isilon_v220_v121.json
+ dell-csi-operator-bundle/driverconfig/isilon_v220_v122.json
+ dell-csi-operator-bundle/driverconfig/isilon_v220_v123.json
+ dell-csi-operator-bundle/driverconfig/powermax_v200_v119.json
+ ...
+ ...
*
* Complete
-Offline bundle file is: /dell/git/dell-csi-operator/dell-csi-operator-bundle.tar.gz
+Offline bundle file is: /root/dell-csi-operator/dell-csi-operator-bundle.tar.gz
+
```
### Unpacking the offline bundle and preparing for installation
@@ -161,7 +152,7 @@ The script will then perform the following steps:
An example of preparing the bundle for installation (localregistry:5000 refers to an image registry accessible to Kubernetes/OpenShift):
```
-[user@anothersystem /tmp]# tar xvfz dell-csi-operator-bundle.tar.gz
+tar xvfz dell-csi-operator-bundle.tar.gz
dell-csi-operator-bundle/
dell-csi-operator-bundle/samples/
...
@@ -171,99 +162,87 @@ dell-csi-operator-bundle/LICENSE
dell-csi-operator-bundle/README.md
```
```
-[user@anothersystem /tmp]# cd dell-csi-operator-bundle
+cd dell-csi-operator-bundle
```
```
-[user@anothersystem /tmp/dell-csi-operator-bundle]# scripts/csi-offline-bundle.sh -p -r 192.168.75.40:5000/operator
-Preparing an offline bundle for installation
+[root@user scripts]# ./csi-offline-bundle.sh -p -r localregistry:5000/csi-operator
+Preparing a offline bundle for installation
*
* Loading docker images
+ 5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
+ e20ed4c73206: Loading layer [==================================================>] 17.22MB/17.22MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
+ d72a74c56330: Loading layer [==================================================>] 3.031MB/3.031MB
+ f2d2ab12e2a7: Loading layer [==================================================>] 48.08MB/48.08MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2
+ 417cb9b79ade: Loading layer [==================================================>] 3.062MB/3.062MB
+ 61fefb35ccee: Loading layer [==================================================>] 16.88MB/16.88MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
+ 7a5b9c0b4b14: Loading layer [==================================================>] 3.031MB/3.031MB
+ 1555ad6e2d44: Loading layer [==================================================>] 49.86MB/49.86MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
+ 2de1422d5d2d: Loading layer [==================================================>] 54.56MB/54.56MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1
+ 25a1c1010608: Loading layer [==================================================>] 54.54MB/54.54MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
+ 07363fa84210: Loading layer [==================================================>] 3.062MB/3.062MB
+ 5227e51ea570: Loading layer [==================================================>] 54.92MB/54.92MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
+ cfb5cbeabdb2: Loading layer [==================================================>] 55.38MB/55.38MB
+ Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ ...
+ ...
*
* Tagging and pushing images
- dellemc/csi-isilon:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.4.0.000R
- dellemc/csi-isilon:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.5.0
- dellemc/csi-isilon:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.6.0
- dellemc/csipowermax-reverseproxy:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csipowermax-reverseproxy:v1.3.0
- dellemc/csi-powermax:v1.5.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.5.0.000R
- dellemc/csi-powermax:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.6.0
- dellemc/csi-powermax:v1.7.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.7.0
- dellemc/csi-powerstore:v1.2.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.2.0.000R
- dellemc/csi-powerstore:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.3.0
- dellemc/csi-powerstore:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.4.0
- dellemc/csi-unity:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.4.0.000R
- dellemc/csi-unity:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.5.0
- dellemc/csi-unity:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.6.0
- dellemc/csi-vxflexos:v1.3.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.3.0.000R
- dellemc/csi-vxflexos:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.4.0
- dellemc/csi-vxflexos:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.5.0
- dellemc/dell-csi-operator:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/dell-csi-operator:v1.4.0
- dellemc/sdc:3.5.1.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1
- dellemc/sdc:3.5.1.1-1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1-1
- docker.io/busybox:1.32.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/busybox:1.32.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.0.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.1.0
- k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.2.1
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.0.1
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.1.0
- k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.2.0
- k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.0.2
- k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.1.0
- k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.2.1
- k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.2.0
- k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.2
- k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.3
- k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.0.0
- k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.1.0
- quay.io/k8scsi/csi-resizer:v1.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.0.0
- quay.io/k8scsi/csi-resizer:v1.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.1.0
+ localregistry:5035/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0
+ dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0
+ dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0
+ dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0
+ dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0
+ dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0
+ dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0
+ dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0
+ dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly
+ dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0
+ dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0
+ dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0
+ dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0
+ dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1
+ dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
+ dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
+ docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
+ ...
+ ...
*
-* Preparing operator files within /tmp/dell-csi-operator-bundle
-
- changing: dellemc/csi-isilon:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.4.0.000R
- changing: dellemc/csi-isilon:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.5.0
- changing: dellemc/csi-isilon:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.6.0
- changing: dellemc/csipowermax-reverseproxy:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csipowermax-reverseproxy:v1.3.0
- changing: dellemc/csi-powermax:v1.5.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.5.0.000R
- changing: dellemc/csi-powermax:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.6.0
- changing: dellemc/csi-powermax:v1.7.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.7.0
- changing: dellemc/csi-powerstore:v1.2.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.2.0.000R
- changing: dellemc/csi-powerstore:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.3.0
- changing: dellemc/csi-powerstore:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.4.0
- changing: dellemc/csi-unity:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.4.0.000R
- changing: dellemc/csi-unity:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.5.0
- changing: dellemc/csi-unity:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.6.0
- changing: dellemc/csi-vxflexos:v1.3.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.3.0.000R
- changing: dellemc/csi-vxflexos:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.4.0
- changing: dellemc/csi-vxflexos:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.5.0
- changing: dellemc/dell-csi-operator:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/dell-csi-operator:v1.4.0
- changing: dellemc/sdc:3.5.1.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1
- changing: dellemc/sdc:3.5.1.1-1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1-1
- changing: docker.io/busybox:1.32.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/busybox:1.32.0
- changing: k8s.gcr.io/sig-storage/csi-attacher:v3.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.0.0
- changing: k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.1.0
- changing: k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.2.1
- changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.0.1
- changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.1.0
- changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.2.0
- changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.0.2
- changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.1.0
- changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.2.1
- changing: k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.2.0
- changing: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.2
- changing: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.3
- changing: k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.0.0
- changing: k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.1.0
- changing: quay.io/k8scsi/csi-resizer:v1.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.0.0
- changing: quay.io/k8scsi/csi-resizer:v1.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.1.0
-
+* Preparing operator files within /root/dell-csi-operator-bundle
+
+ changing: localregistry:5000/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0
+ changing: dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0
+ changing: dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0
+ changing: dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0
+ changing: dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0
+ changing: dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0
+ changing: dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0
+ changing: dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0
+ changing: dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly
+ changing: dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0
+ changing: dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0
+ changing: dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0
+ changing: dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0
+ changing: dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1
+ changing: dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
+ changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
+ changing: docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
+ ...
+ ...
+
*
* Complete
-
```
### Perform either a Helm installation or Operator installation
diff --git a/content/v3/csidriver/installation/operator/_index.md b/content/v3/csidriver/installation/operator/_index.md
index 468761f0f6..71140cd643 100644
--- a/content/v3/csidriver/installation/operator/_index.md
+++ b/content/v3/csidriver/installation/operator/_index.md
@@ -1,28 +1,28 @@
---
-title: "Dell CSI Operator Installation Process"
+title: "CSI Driver installation using Dell CSI Operator"
linkTitle: "Using Operator"
weight: 4
description: >
Installation of CSI drivers using Dell CSI Operator
---
-The Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell EMC for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
+The Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
## Prerequisites
-#### Volume Snapshot CRD's
+#### Volume Snapshot CRDs
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once per cluster, irrespective of the number of CSI drivers installed. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. On clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
+ - [k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
#### Installation example
@@ -37,7 +37,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
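+
+To confirm which version is actually deployed, the snapshot-controller image tag can be inspected (a quick sketch; `kube-system` is assumed here as the controller's namespace, which varies with how it was installed):
+```
+kubectl get deployment snapshot-controller -n kube-system \
+  -o jsonpath='{.spec.template.spec.containers[0].image}'
+```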
## Installation
@@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa
#### Full list of CSI Drivers and versions supported by the Dell CSI Operator
| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 1.7 | v6 | 1.19, 1.20, 1.21 | 4.6, 4.7 |
| CSI PowerMax | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerFlex | 1.5 | v5 | 1.19, 1.20, 1.21 | 4.6, 4.7 |
+| CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerFlex | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerScale | 1.6 | v6 | 1.19, 1.20, 1.21 | 4.6, 4.7 |
+| CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerScale | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
-| CSI Unity | 1.6 | v5 | 1.19, 1.20, 1.21 | 4.6, 4.7 |
+| CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI Unity | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI Unity | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerStore | 1.4 | v4 | 1.19, 1.20, 1.21 | 4.6, 4.7 |
+| CSI Unity | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerStore | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 |
| CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
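+
+To see which rows of this matrix apply to a given cluster, check the server version first (a simple sketch; `--short` is accepted by the kubectl releases this matrix covers, and newer kubectl prints the same information by default):
+```
+kubectl version --short
+```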
@@ -76,7 +76,7 @@ The installation process involves the creation of a `Subscription` object either
* _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available)
* _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades
-**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
+**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**.
#### Pre-Requisite for installation with OLM
-Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
+Please run the following commands to create the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
@@ -98,8 +98,9 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
->**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
+>**Skip step 1 for "offline bundle installation" and continue using the workspace created by extracting dell-csi-operator-bundle.tar.gz.**
1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
-2. git checkout dell-csi-operator-
-3. Run `bash scripts/install.sh` to install the operator.
+2. cd dell-csi-operator
+3. git checkout dell-csi-operator-<your-version>
+4. Run `bash scripts/install.sh` to install the operator.
->NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
+>NOTE: Dell CSI Operator version 1.4.0 and higher installs to the 'dell-csi-operator' namespace by default.
Any existing installations of Dell CSI Operator (v1.2.0 or later) installed using `install.sh` to the 'default' or 'dell-csi-operator' namespace can be upgraded to the new version by running `install.sh --upgrade`.
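+
+After `install.sh` completes, a quick sanity check confirms that the operator deployment is running (assuming the default 'dell-csi-operator' namespace):
+```
+kubectl get pods -n dell-csi-operator
+```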
@@ -126,8 +127,7 @@ For installation of the supported drivers, a `CustomResource` has to be created
### Pre-requisites for upstream Kubernetes Clusters
On upstream Kubernetes clusters, make sure to install
* VolumeSnapshot CRDs
- * On clusters running v1.20,v1.21 & v1.22, make sure to install v1 VolumeSnapshot CRDs
- * On clusters running v1.19, make sure to install v1beta1 VolumeSnapshot CRDs
+ * On clusters running v1.21,v1.22 & v1.23, make sure to install v1 VolumeSnapshot CRDs
* External Volume Snapshot Controller with the correct version
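+
+One way to verify both prerequisites before installing a driver (a sketch; the second command simply looks for the controller pod in whichever namespace it was deployed):
+```
+kubectl api-resources --api-group=snapshot.storage.k8s.io
+kubectl get pods --all-namespaces | grep snapshot-controller
+```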
### Pre-requisites for Red Hat OpenShift Clusters
@@ -210,36 +210,6 @@ Finally, you have to restart the service by providing the command
-For additional information refer to official documentation of the multipath configuration.
+For additional information, refer to the official multipath configuration documentation.
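+
+On systemd-based hosts this typically amounts to the following (a generic sketch; service management can differ across distributions):
+```
+sudo systemctl enable --now multipathd   # start multipathd and enable it at boot
+sudo systemctl restart multipathd        # reload after editing /etc/multipath.conf
+```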
-## Replacing CSI Operator with Dell CSI Operator
-`Dell CSI Operator` was previously available, with the name `CSI Operator`, for both manual and OLM installation.
-`CSI Operator` has been discontinued and has been renamed to `Dell CSI Operator`. This is just a name change and as a result,
-the Kubernetes resources created as part of the Operator deployment will use the name `dell-csi-operator` instead of `csi-operator`.
-
-Before proceeding with the installation of the new `Dell CSI Operator`, any existing `CSI Operator` installation has to be completely
-removed from the cluster.
-
-Note - This **doesn't** impact any of the CSI Drivers which have been installed in the cluster
-
-If the old `CSI Operator` was installed manually, then run the following command from the root of the repository which was used
-originally for installation
-
- bash scripts/undeploy.sh
-
-If you don't have the original repository available, then run the following commands
-
- git clone https://github.com/dell/dell-csi-operator.git
- cd dell-csi-operator
- git checkout csi-operator-v1.0.0
- bash scripts/undeploy.sh
-
-Note - Once you have removed the old `CSI Operator`, then for installing the new `Dell CSI Operator`, you will need to pull/checkout the latest code
-
-If you had installed the old CSI Operator using OLM, then please follow the uninstallation instructions provided by OperatorHub. This will mostly involve:
-
- * Deleting the CSI Operator Subscription
- * Deleting the CSI Operator CSV
-
-
## Installing CSI Driver via Operator
CSI Drivers can be installed by creating a `CustomResource` object in your cluster.
@@ -251,8 +221,8 @@ Or
{driver name}_{driver version}_ops_{OpenShift version}.yaml
-For e.g.
+For example:
-* sample/powermax_v140_k8s_117.yaml* <- To install CSI PowerMax driver v1.4.0 on a Kubernetes 1.17 cluster
-* sample/powermax_v140_ops_46.yaml* <- To install CSI PowerMax driver v1.4.0 on an OpenShift 4.6 cluster
+* `samples/powermax_v220_k8s_123.yaml` <- To install CSI PowerMax driver v2.2.0 on a Kubernetes 1.23 cluster
+* `samples/powermax_v220_ops_49.yaml` <- To install CSI PowerMax driver v2.2.0 on an OpenShift 4.9 cluster
-Copy the correct sample file and edit the mandatory & any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification)
+Copy the correct sample file and edit the mandatory and any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification).
>NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields.
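+
+Putting these steps together, the flow is copy, edit, then create (a sketch using the PowerMax sample named above; the copied file name is arbitrary):
+```
+cp samples/powermax_v220_k8s_123.yaml powermax-cr.yaml
+# edit powermax-cr.yaml and set the mandatory parameters for your environment
+kubectl create -f powermax-cr.yaml
+```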
@@ -293,14 +263,19 @@ The CSI Drivers installed by the Dell CSI Operator can be updated like any Kuber
# Replace driver-namespace with the namespace where the Unity driver is installed
$ kubectl edit csiunity/unity -n <driver-namespace>
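+# A non-interactive alternative is a merge patch (a sketch; 'replicas' is one
+# field under spec.driver in the operator's sample manifests)
+$ kubectl patch csiunity/unity -n <driver-namespace> --type merge -p '{"spec":{"driver":{"replicas":2}}}'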