diff --git a/config.toml b/config.toml index 03cc74d58b..5513265496 100644 --- a/config.toml +++ b/config.toml @@ -169,19 +169,19 @@ enable = false # icon = "fa fa-envelope" # desc = "Discuss development issues around the project" [[params.versions]] - version = "Current(v1.3)" + version = "Current(v1.4)" url = "https://dell.github.io/csm-docs/docs/" [[params.versions]] - version = "v1.2.1" + version = "v1.3" url = "https://dell.github.io/csm-docs/v1" [[params.versions]] - version = "v1.2" + version = "v1.2.1" url = "https://dell.github.io/csm-docs/v2" [[params.versions]] - version = "v1.1" + version = "v1.2" url = "https://dell.github.io/csm-docs/v3" [[menu.main]] diff --git a/content/docs/_index.md b/content/docs/_index.md index 66409655e5..033d626a6f 100644 --- a/content/docs/_index.md +++ b/content/docs/_index.md @@ -1,40 +1,91 @@ --- -title: "Dell Technologies (Dell) Container Storage Modules (CSM)" -linkTitle: "Dell Technologies (Dell) Container Storage Modules (CSM)" +title: "Container Storage Modules" +linkTitle: "Container Storage Modules" weight: 20 menu: main: weight: 20 +no_list: true --- -The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization and, resiliency. +The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization, application mobility, encryption, and resiliency. CSM Hex Diagram -CSM is made up of multiple components including modules (enterprise capabilities), CSI drivers (storage enablement) and, other related applications (deployment, feature controllers, etc). +CSM is made up of multiple components including modules (enterprise capabilities), CSI drivers (storage enablement), and other related applications (deployment, feature controllers, etc). + +{{< cardpane >}} + {{< card header="[**Authorization**](authorization/)" + footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerScale](csidriver/features/powerscale/) [PowerMax](csidriver/features/powermax/)">}} + CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules.
+[...Learn more](authorization/)
+
+ {{< /card >}}
+ {{< card header="[**Replication**](replication/)"
+ footer="Supports [PowerStore](csidriver/features/powerstore/) [PowerScale](csidriver/features/powerscale/) [PowerMax](csidriver/features/powermax/)">}}
+ The CSM for Replication project aims to bring Replication & Disaster Recovery capabilities of Dell Storage Arrays to Kubernetes clusters. It helps you replicate groups of volumes and provides a way to restart applications during both planned and unplanned migrations.
+[...Learn more](replication/)
+{{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+{{< card header="[**Resiliency**](resiliency/)"
+ footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerScale](csidriver/features/powerscale/) [Unity](csidriver/features/unity/)">}}
+ CSM for Resiliency is designed to make Kubernetes applications, including those that utilize persistent storage, more resilient to various failures.
+[...Learn more](resiliency/)
+ {{< /card >}}
+{{< card header="[**Observability**](observability/)"
+ footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerStore](csidriver/features/powerstore/)">}}
+ CSM for Observability provides visibility into the capacity of the volumes/file shares that are being managed with Dell CSM CSI (Container Storage Interface) drivers, along with their performance in terms of bandwidth, IOPS, and response time.
+[...Learn more](observability/)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+{{< card header="[**Application Mobility**](applicationmobility/)"
+ footer="Supports all platforms">}}
+ Container Storage Modules for Application Mobility provide Kubernetes administrators the ability to clone their stateful application workloads and application data to other clusters, either on-premise or in the cloud.
+ [...Learn more](applicationmobility/)
+ {{< /card >}}
+ {{< card header="[**Encryption**](secure/encryption)"
+ footer="Supports PowerScale">}}
+ Encryption provides the capability to encrypt user data residing on volumes created by Dell CSI Drivers.
+ [...Learn more](secure/encryption/)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[License](license/)"
+ footer="Required for [Application Mobility](applicationmobility/) & [Encryption](secure/encryption/)">}}
+ The tech-preview releases of Application Mobility and Encryption require a license.
+ Request a license using the [Container Storage Modules License Request](https://app.smartsheet.com/b/form/5e46fad643874d56b1f9cf4c9f3071fb) form by providing the requested details.
+ [...Learn more](license/) + {{< /card >}} +{{< /cardpane >}} CSM Diagram ## CSM Supported Modules and Dell CSI Drivers -| Modules/Drivers | CSM 1.3 | [CSM 1.2.1](../v1/) | [CSM 1.2](../v2/) | [CSM 1.1](../v3/) | +| Modules/Drivers | CSM 1.4 | [CSM 1.3](../v1/) | [CSM 1.2.1](../v2/) | [CSM 1.2](../v3/) | | - | :-: | :-: | :-: | :-: | -| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 | -| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.2.0 | v1.1.1 | v1.1.0 | v1.0.1 | -| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 | -| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.2.0 | v1.1.0 | v1.1.0 | v1.0.1 | -| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | -| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | -| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.3.0 | v2.2.0 | v2.2.0| v2.1.0 | -| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | -| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | +| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.4.0 | v1.3.0 | v1.2.0 | v1.2.0 | +| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.3.0 | v1.2.0 | v1.1.1 | v1.1.0 | +| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.3.0 | v1.2.0 | v1.2.0 | +| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.3.0 | v1.2.0 | v1.1.0 | v1.1.0 | +| [Encryption](https://hub.docker.com/r/dellemc/csm-encryption) | v0.1.0 | NA | NA | NA | +| [Application Mobility](https://hub.docker.com/r/dellemc/csm-application-mobility-controller) | v0.1.0 | NA | NA | NA | +| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 | +| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 | +| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.4.0 | v2.3.0 | v2.2.0| v2.2.0 | +| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 | +| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 | ## CSM Modules Support Matrix for Dell CSI Drivers -| CSM Module | CSI PowerFlex v2.3.0 | CSI PowerScale v2.3.0 | CSI PowerStore v2.3.0 | CSI PowerMax v2.3.0 | CSI Unity XT v2.3.0 | +| CSM Module | CSI PowerFlex v2.4.0 | CSI PowerScale v2.4.0 | CSI PowerStore v2.4.0 | CSI PowerMax v2.4.0 | CSI Unity XT v2.4.0 | | ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- | -| Authorization v1.3| ✔️ | ✔️ | ❌ | ✔️ | ❌ | -| Observability v1.2| ✔️ | ❌ | ✔️ | ❌ | ❌ | +| Authorization v1.4| ✔️ | ✔️ | ❌ | ✔️ | ❌ | +| Observability v1.3| ✔️ | ✔️ | ✔️ | ❌ | ❌ | | Replication v1.3| ❌ | ✔️ | ✔️ | ✔️ | ❌ | -| Resiliency v1.2| ✔️ | ✔️ | ❌ | ❌ | ✔️ | +| Resiliency v1.3| ✔️ | ✔️ | ❌ | ❌ | ✔️ | +| Encryption v0.1.0| ❌ | ✔️ | ❌ | ❌ | ❌ | +| Application Mobility v0.1.0| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | diff --git a/content/docs/applicationmobility/_index.md b/content/docs/applicationmobility/_index.md new file mode 100644 index 
0000000000..af367ffed0
--- /dev/null
+++ b/content/docs/applicationmobility/_index.md
@@ -0,0 +1,40 @@
+---
+title: "Application Mobility"
+linkTitle: "Application Mobility"
+weight: 9
+Description: >
+  Application Mobility
+---
+
+>> NOTE: This tech-preview release is not intended for use in a production environment.
+
+>> NOTE: Application Mobility requires a time-based license. See [Deployment](./deployment) for instructions.
+
+Container Storage Modules for Application Mobility provide Kubernetes administrators the ability to clone their stateful application workloads and application data to other clusters, either on-premise or in the cloud.
+
+Application Mobility uses [Velero](https://velero.io) and its integration of [Restic](https://restic.net) to copy both application metadata and data to object storage. When a backup is requested, Application Mobility uses these options to determine how the application data is backed up:
+- If [Volume Group Snapshots](../snapshots/volume-group-snapshots/) are enabled on the CSI driver backing the application's Persistent Volumes, crash-consistent snapshots of all volumes are used for the backup.
+- If [Volume Snapshots](../snapshots/) are enabled on the Kubernetes cluster and supported by the CSI driver, individual snapshots are used for each Persistent Volume used by the application.
+- If no snapshot options are enabled, full copies of each Persistent Volume used by the application are made.
+
+After a backup has been created, it can be restored on the same Kubernetes cluster or any other cluster(s) if these criteria are met:
+- Application Mobility is installed on the target cluster(s).
+- The target cluster(s) has access to the object store bucket. For example, if backing up and restoring an application from an on-premise Kubernetes cluster to AWS EKS, an S3 bucket can be used if both the on-premise and EKS clusters have access to it.
+- A Storage Class is defined on the target cluster(s) to support creating the required Persistent Volumes used by the application.
+
+## Supported Data Movers
+{{}}
+| Data Mover | Description |
+|-|-|
+| Restic | Persistent Volume data will be stored in the provided object store bucket |
+{{
}} + +## Supported Operating Systems/Container Orchestrator Platforms +{{}} +| COP/OS | Supported Versions | +|-|-| +| Kubernetes | 1.23, 1.24 | +| Red Hat OpenShift | 4.10 | +| RHEL | 7.x, 8.x | +| CentOS | 7.8, 7.9 | +{{
}} \ No newline at end of file diff --git a/content/docs/applicationmobility/deployment.md b/content/docs/applicationmobility/deployment.md new file mode 100644 index 0000000000..d5ffb3e8fd --- /dev/null +++ b/content/docs/applicationmobility/deployment.md @@ -0,0 +1,62 @@ +--- +title: "Deployment" +linkTitle: "Deployment" +weight: 1 +Description: > + Deployment +--- + +## Pre-requisites +- [Request a License for Application Mobility](../../license/) +- Object store bucket accessible by both the source and target clusters + +## Installation +1. Create a namespace where Application Mobility will be installed. + ``` + kubectl create ns application-mobility + ``` +2. Edit the license Secret file (see Pre-requisites above) and set the correct namespace (ex: `namespace: application-mobility`) +3. Create the Secret containing a license file + ``` + kubectl apply -f license.yml + ``` +4. Add the Dell Helm Charts repository + ``` + helm repo add dell https://dell.github.io/helm-charts + ``` +5. Either create a values.yml file or provide the `--set` options to the `helm install` to override default values from the [Configuration](#configuration) section. +6. Install the helm chart + ``` + helm install application-mobility -n application-mobility dell/csm-application-mobility + ``` + + +### Configuration + +This table lists the configurable parameters of the Application Mobility Helm chart and their default values. + +| Parameter | Description | Required | Default | +| - | - | - | - | +| `replicaCount` | Number of replicas for the Application Mobility controllers | Yes | `1` | +| `image.pullPolicy` | Image pull policy for the Application Mobility controller images | Yes | `IfNotPresent` | +| `controller.image` | Location of the Application Mobility Docker image | Yes | `dell/csm-application-mobility-controller:v0.1.0` | +| `cert-manager.enabled` | If set to true, cert-manager will be installed during Application Mobility installation | Yes | `false` | +| `veleroNamespace` | If Velero is already installed, set to the namespace where Velero is installed | No | `velero` | +| `licenseName` | Name of the Secret that contains the License for Application Mobility | Yes | `license` | +| `objectstore.secretName` | If velero is already installed on the cluster, specify the name of the secret in velero namespace that has credentials to access object store | No | ` ` | +| `velero.enabled` | If set to true, Velero will be installed during Application Mobility installation | Yes | `true` | +| `velero.use-volume-snapshots` | If set to true, Velero will use volume snapshots | Yes | `false` | +| `velero.deployRestic` | If set to true, Velero will also deploy Restic | Yes | `true` | +| `velero.cleanUpCRDs` | If set to true, Velero CRDs will be cleaned up | Yes | `true` | +| `velero.credentials.existingSecret` | Optionally, specify the name of the pre-created secret in the release namespace that holds the object store credentials. Either this or secretContents should be specified | No | ` ` | +| `velero.credentials.name` | Optionally, specify the name to be used for secret that will be created to hold object store credentials. Used in conjunction with secretContents. | No | ` ` | +| `velero.credentials.secretContents` | Optionally, specify the object store access credentials to be stored in a secret with key "cloud". Either this or existingSecret should be provided. | No | ` ` | +| `velero.configuration.provider` | Provider to use for Velero. 
| Yes | `aws` | +| `velero.configuration.backupStorageLocation.name` | Name of the backup storage location for Velero. | Yes | `default` | +| `velero.configuration.backupStorageLocation.bucket` | Name of the object store bucket to use for backups. | Yes | `velero-bucket` | +| `velero.configuration.backupStorageLocation.config` | Additional provider-specific configuration. See https://velero.io/docs/v1.9/api-types/backupstoragelocation/ for specific details. | Yes | ` ` | +| `velero.initContainers` | List of plugins used by Velero. Dell Velero plugin is required and plugins for other providers can be added. | Yes | ` ` | +| `velero.initContainers[0].name` | Name of the Dell Velero plugin. | Yes | `dell-custom-velero-plugin` | +| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.1.0` | +| `velero.initContainers[0].volumeMounts[0].mountPath` | Mount path of the volume mount. | Yes | `/target` | +| `velero.initContainers[0].volumeMounts[0].name` | Name of the volume mount. | Yes | `plugins` | \ No newline at end of file diff --git a/content/docs/applicationmobility/release.md b/content/docs/applicationmobility/release.md new file mode 100644 index 0000000000..8baa811262 --- /dev/null +++ b/content/docs/applicationmobility/release.md @@ -0,0 +1,23 @@ +--- +title: "Release Notes" +linkTitle: "Release Notes" +weight: 5 +Description: > + Release Notes +--- + + +## Release Notes - CSM Application Mobility 0.1.0 +### New Features/Changes + +- [Technical preview release](https://github.com/dell/csm/issues/449) +- Clone stateful application workloads and application data to other clusters, either on-premise or in the cloud +- Supports Restic as a data mover for application data + +### Fixed Issues + +There are no fixed issues in this release. + +### Known Issues + +There are no known issues in this release. diff --git a/content/docs/applicationmobility/troubleshooting.md b/content/docs/applicationmobility/troubleshooting.md new file mode 100644 index 0000000000..b015781524 --- /dev/null +++ b/content/docs/applicationmobility/troubleshooting.md @@ -0,0 +1,48 @@ +--- +title: "Troubleshooting" +linkTitle: "Troubleshooting" +weight: 4 +Description: > + Troubleshooting +--- + +## Frequently Asked Questions +1. [How can I diagnose an issue with Application Mobility?](#how-can-i-diagnose-an-issue-with-application-mobility) +2. [How can I view logs?](#how-can-i-view-logs) +3. [How can I debug and troubleshoot issues with Kubernetes?](#how-can-i-debug-and-troubleshoot-issues-with-kubernetes) +4. [Why are there error logs about a license?](#why-are-there-error-logs-about-a-license) + +### How can I diagnose an issue with Application Mobility? + +Once you have attempted to install Application Mobility to your Kubernetes or OpenShift cluster, the first step in troubleshooting is locating the problem. + +Get information on the state of your Pods. +```console +kubectl get pods -n $namespace +``` +Get verbose output of the current state of a Pod. +```console +kubectl describe pod -n $namespace $pod +``` +### How can I view logs? + +View pod container logs. Output logs to a file for further debugging. +```console +kubectl logs -n $namespace $pod $container +kubectl logs -n $namespace $pod $container > $logFileName +``` + +### How can I debug and troubleshoot issues with Kubernetes? 
+ +* To debug your application that may not be behaving correctly, please reference Kubernetes [troubleshooting applications guide](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/). + +* For tips on debugging your cluster, please see this [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/). + +### Why are there error logs about a license? + +Application Mobility requires a license in order to function. See the [Deployment](../deployment) instructions for steps to request a license. + +There will be errors in the logs about the license for these cases: +- License does not exist +- License is not valid for the current Kubernetes cluster +- License has expired \ No newline at end of file diff --git a/content/docs/applicationmobility/uninstallation.md b/content/docs/applicationmobility/uninstallation.md new file mode 100644 index 0000000000..3e98fb7040 --- /dev/null +++ b/content/docs/applicationmobility/uninstallation.md @@ -0,0 +1,17 @@ +--- +title: Uninstallation +linktitle: Uninstallation +weight: 2 +description: > + Uninstallation +--- + +This section outlines the uninstallation steps for Application Mobility. + +## Uninstall the Application Mobility Helm Chart + +This command removes all the Kubernetes components associated with the chart. + +``` +$ helm delete [APPLICATION_MOBILITY_NAME] --namespace [APPLICATION_MOBILITY_NAMESPACE] +``` diff --git a/content/docs/applicationmobility/use_cases.md b/content/docs/applicationmobility/use_cases.md new file mode 100644 index 0000000000..544a3dcd26 --- /dev/null +++ b/content/docs/applicationmobility/use_cases.md @@ -0,0 +1,145 @@ +--- +title: "Use Cases" +linkTitle: "Use Cases" +weight: 3 +Description: > + Use Cases +--- + +After Application Mobility is installed, the [dellctl CLI](../../references/cli/) can be used to register clusters and manage backups and restores of applications. These examples also provide references for using the Application Mobility Custom Resource Definitions (CRDs) to define Custom Resources (CRs) as an alternative to using the `dellctl` CLI. + +## Backup and Restore an Application +This example details the steps when an application in namespace `demo1` is being backed up and then later restored to either the same cluster or another cluster. In this sample, both Application Mobility and Velero are installed in the `application-mobility` namespace. + +1. If Velero is not installed in the default `velero` namespace and `dellctl` is being used, set this environment variable to the namespace where it is installed: + ``` + export VELERO_NAMESPACE=application-mobility + ``` +1. On the source cluster, create a Backup by providing a name and the included namespace where the application is installed. The application and its data will be available in the object store bucket and can be restored at a later time. + + Using dellctl: + ``` + dellctl backup create backup1 --include-namespaces demo1 --namespace application-mobility + ``` + Using Backup Custom Resource: + ``` + apiVersion: mobility.storage.dell.com/v1alpha1 + kind: Backup + metadata: + name: backup1 + namespace: application-mobility + spec: + includedNamespaces: [demo1] + datamover: Restic + clones: [] + ``` +1. Monitor the backup status until it is marked as Completed. + + Using dellctl: + ``` + dellctl backup get --namespace application-mobility + ``` + + Using kubectl: + ``` + kubectl describe backups.mobility.storage.dell.com/backup1 -n application-mobility + ``` + +1. 
If the Storage Class name on the target cluster is different than the Storage Class name on the source cluster where the backup was created, a mapping between source and target Storage Class names must be defined. See [Changing PV/PVC Storage Classes](#changing-pvpvc-storage-classes). +1. The application and its data can be restored on either the same cluster or another cluster by referring to the backup name and providing an optional mapping of the original namespace to the target namespace. + + Using dellctl: + ``` + dellctl restore create restore1 --from-backup backup1 \ + --namespace-mappings "demo1:restorens1" --namespace application-mobility + ``` + + Using Restore Custom Resource: + ``` + apiVersion: mobility.storage.dell.com/v1alpha1 + kind: Restore + metadata: + name: restore1 + namespace: application-mobility + spec: + backupName: backup1 + namespaceMapping: + "demo1" : "restorens1" + ``` +1. Monitor the restore status until it is marked as Completed. + + Using dellctl: + ``` + dellctl restore get --namespace application-mobility + ``` + + Using kubectl: + ``` + kubectl describe restores.mobility.storage.dell.com/restore1 -n application-mobility + ``` + + +## Clone an Application +This example details the steps when an application in namespace `demo1` is cloned from a source cluster to a target cluster in a single operation. In this sample, both Application Mobility and Velero are installed in the `application-mobility` namespace. + +1. If Velero is not installed in the default `velero` namespace and `dellctl` is being used, set this environment variable to the namespace where it is installed: + ``` + export VELERO_NAMESPACE=application-mobility + ``` +1. Register the target cluster if using `dellctl` + ``` + dellctl cluster add -n targetcluster -u -f ~/kubeconfigs/target-cluster-kubeconfig + ``` +1. If the Storage Class name on the target cluster is different than the Storage Class name on the source cluster where the backup was created, a mapping between source and target Storage Class names must be defined. See [Changing PV/PVC Storage Classes](#changing-pvpvc-storage-classes). +1. Create a Backup by providing a name, the included namespace where the application is installed, and the target cluster and namespace mapping where the application will be restored. + + Using dellctl: + ``` + dellctl backup create backup1 --include-namespaces demo1 --clones "targetcluster/demo1:restore-ns2" \ + --namespace application-mobility + ``` + + Using Backup Custom Resource: + ``` + apiVersion: mobility.storage.dell.com/v1alpha1 + kind: Backup + metadata: + name: backup1 + namespace: application-mobility + spec: + includedNamespaces: [demo1] + datamover: Restic + clones: + - namespaceMapping: + "demo1": "restore-ns2" + restoreOnceAvailable: true + targetCluster: targetcluster + ``` + +1. Monitor the restore status on the target cluster until it is marked as Completed. + + Using dellctl: + ``` + dellctl restore get --namespace application-mobility + ``` + + Using kubectl: + ``` + kubectl get restores.mobility.storage.dell.com -n application-mobility + kubectl describe restores.mobility.storage.dell.com/ -n application-mobility + ``` + +## Changing PV/PVC Storage Classes +Create a ConfigMap on the target cluster in the same namespace where Application Mobility is installed. The data field must contain a mapping of source Storage Class name to target Storage Class name. 
See Velero's documentation for [Changing PV/PVC Storage Classes](https://velero.io/docs/v1.9/restore-reference/#changing-pvpvc-storage-classes) for additional details. +``` +apiVersion: v1 +kind: ConfigMap +metadata: + name: change-storage-class-config + namespace: + labels: + velero.io/plugin-config: "" + velero.io/change-storage-class: RestoreItemAction +data: + : +``` diff --git a/content/docs/authorization/Backup and Restore/_index.md b/content/docs/authorization/Backup and Restore/_index.md new file mode 100644 index 0000000000..816195bbd7 --- /dev/null +++ b/content/docs/authorization/Backup and Restore/_index.md @@ -0,0 +1,12 @@ +--- +title: Backup and Restore +linktitle: Backup and Restore +weight: 2 +description: Methods to backup and restore CSM Authorization +tags: + - backup + - restore + - csm-authorization +--- + +Backup and Restore information for CSM Authorization can be found in this section. \ No newline at end of file diff --git a/content/docs/authorization/Backup and Restore/helm/_index.md b/content/docs/authorization/Backup and Restore/helm/_index.md new file mode 100644 index 0000000000..7ba38bff0b --- /dev/null +++ b/content/docs/authorization/Backup and Restore/helm/_index.md @@ -0,0 +1,115 @@ +--- +title: Helm +linktitle: Helm +description: > + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Helm backup and restore +--- + +## Roles + + +Role data is stored in the `common` Config Map. + +### Steps to execute in the existing Authorization deployment + +1. Save the role data by saving the `common` configMap to a file. + +``` +kubectl -n get configMap common -o yaml > roles.yaml +``` + +### Steps to execute in the Authorization deployment to restore + +1. Delete the existing `common` configMap. + +``` +kubectl -n delete configMap common +``` + +2. Apply the file containing the backed-up role data. + +``` +kubectl apply -f roles.yaml +``` + +3. Restart the `proxy-server` deployment. + +``` +kubectl -n rollout restart deploy/proxy-server +deployment.apps/proxy-server restarted +``` + +## Storage + +Storage data is stored in the `karavi-storage-secret` Secret. + +### Steps to execute in the existing Authorization deployment + +1. Save the storage data by saving the `karavi-storage-secret` Secret to a file. + +``` +kubectl -n get secret karavi-storage-secret -o yaml > storage.yaml +``` + +### Steps to execute in the Authorization deployment to restore + +1. Delete the existing `karavi-storage-secret` secret. + +``` +kubectl -n delete secret karavi-storage-secret +``` + +2. Apply the file containing the storage data created in step 1. + +``` +kubectl apply -f storage.yaml +``` + +3. Restart the `proxy-server` deployment. + +``` +kubectl -n rollout restart deploy/proxy-server +deployment.apps/proxy-server restarted +``` + +## Tenants, Quota, and Volume ownership + +Redis is used to store application data regarding [tenants, quota, and volume ownership](../../design#quota--volume-ownership) with the Storage Class specified in the `redis.storageClass` parameter in the values file, or with the default Storage Class if that parameter was not specified. + +The Persistent Volume for Redis is dynamically provisioned by this Storage Class with the `redis-primary-pv-claim` Persistent Volume Claim. See the example. 
+
+```
+kubectl get persistentvolume
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+k8s-ab74921ab9 8Gi RWO Delete Bound authorization/redis-primary-pv-claim 112m
+```
+
+### Steps to execute in the existing Authorization deployment
+
+1. Create a backup of this volume, typically via snapshot and/or replication, and create a Persistent Volume Claim using this backup by following the Storage Class's provisioner documentation.
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Edit the `redis-primary` Deployment to use the Persistent Volume Claim associated with the backup by running:
+
+`kubectl -n edit deploy/redis-primary`
+
+The Deployment has a volumes field that should look like this:
+
+```
+volumes:
+- name: redis-primary-volume
+  persistentVolumeClaim:
+    claimName: redis-primary-pv-claim
+```
+
+Replace the value of `claimName` with the name of the Persistent Volume Claim associated with the backup. If the new Persistent Volume Claim name is `redis-backup`, you would edit the deployment to look like this:
+
+```
+volumes:
+- name: redis-primary-volume
+  persistentVolumeClaim:
+    claimName: redis-backup
+```
+
+Once saved, Redis will use the backup volume.
\ No newline at end of file
diff --git a/content/docs/authorization/Backup and Restore/rpm/_index.md b/content/docs/authorization/Backup and Restore/rpm/_index.md
new file mode 100644
index 0000000000..4821c6b89c
--- /dev/null
+++ b/content/docs/authorization/Backup and Restore/rpm/_index.md
@@ -0,0 +1,121 @@
+---
+title: RPM
+linktitle: RPM
+description: >
+  Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization RPM backup and restore
+---
+
+## Roles
+
+Role data is stored in the `common` Config Map in the underlying `k3s` deployment.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the role data by saving the `common` configMap to a file.
+
+```
+k3s kubectl -n karavi get configMap common -o yaml > roles.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `common` configMap.
+
+```
+k3s kubectl -n karavi delete configMap common
+```
+
+2. Apply the file containing the role data created in step 1.
+
+```
+k3s kubectl apply -f roles.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Storage
+
+Storage data is stored in the `karavi-storage-secret` Secret in the underlying `k3s` deployment.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the storage data by saving the `karavi-storage-secret` secret to a file.
+
+```
+k3s kubectl -n karavi get secret karavi-storage-secret -o yaml > storage.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `karavi-storage-secret` secret.
+
+```
+k3s kubectl -n karavi delete secret karavi-storage-secret
+```
+
+2. Apply the file containing the storage data created in step 1.
+
+```
+k3s kubectl apply -f storage.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Tenants, Quota, and Volume ownership
+
+Redis is used to store application data regarding [tenants, quota, and volume ownership](../../design#quota--volume-ownership). This data is stored on the system under `/var/lib/rancher/k3s/storage//appendonly.aof`.
+`appendonly.aof` can be copied and used to restore this application data in Authorization deployments. See the example.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Determine the Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim.
+
+```
+k3s kubectl -n karavi get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+redis-primary-pv-claim Bound pvc-12d8cc05-910d-45bd-9f30-f6807b287a69 8Gi RWO local-path 65m
+```
+
+The Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim is `pvc-12d8cc05-910d-45bd-9f30-f6807b287a69`.
+
+2. Copy `appendonly.aof` from the appropriate path to another location.
+
+```
+cp /var/lib/rancher/k3s/storage/pvc-12d8cc05-910d-45bd-9f30-f6807b287a69/appendonly.aof /path/to/copy/appendonly.aof
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Determine the Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim.
+
+```
+k3s kubectl -n karavi get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+redis-primary-pv-claim Bound pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b 8Gi RWO local-path 65m
+```
+
+The Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim is `pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b`.
+
+2. Copy/Overwrite the `appendonly.aof` in the appropriate path using the file copied in step 2 of the previous section.
+
+```
+cp /path/to/copy/appendonly.aof /var/lib/rancher/k3s/storage/pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b/appendonly.aof
+```
+
+3. Restart the `redis-primary` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/redis-primary
+deployment.apps/redis-primary restarted
+```
diff --git a/content/docs/authorization/_index.md b/content/docs/authorization/_index.md
index 744d4918eb..bfa01e9cd3 100644
--- a/content/docs/authorization/_index.md
+++ b/content/docs/authorization/_index.md
@@ -43,7 +43,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
 {{}}
 | | PowerMax | PowerFlex | PowerScale |
 |---------------|:----------------:|:-------------------:|:----------------:|
-| Storage Array |5978.479.479, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
+| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx, Unisphere 10.0| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
 {{
}} ## Supported CSI Drivers @@ -69,6 +69,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t | dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 | | dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 | | dellemc/csm-authorization-sidecar:v1.3.0 | v1.1.0, v1.2.0, v1.3.0 | +| dellemc/csm-authorization-sidecar:v1.4.0 | v1.1.0, v1.2.0, v1.3.0, v1.4.0 | {{}} ## Roles and Responsibilities diff --git a/content/docs/authorization/cli.md b/content/docs/authorization/cli.md index b282d7c3fd..cb0b5242fc 100644 --- a/content/docs/authorization/cli.md +++ b/content/docs/authorization/cli.md @@ -256,6 +256,8 @@ karavictl role get [flags] ``` -h, --help help for get + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -303,6 +305,8 @@ karavictl role list [flags] ``` -h, --help help for list + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -365,6 +369,8 @@ karavictl role create [flags] ``` -f, --from-file string role data from a file --role strings role in the form ==== + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) -h, --help help for create ``` @@ -411,6 +417,8 @@ karavictl role update [flags] ``` -f, --from-file string role data from a file --role strings role in the form ==== + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) -h, --help help for update ``` @@ -452,6 +460,8 @@ karavictl role delete [flags] ``` -h, --help help for delete + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -523,8 +533,9 @@ karavictl rolebinding create [flags] ``` -h, --help help for create - -r, --role string Role name - -t, --tenant string Tenant name + -r, --role string Role name + -t, --tenant string Tenant name + --insecure boolean insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -562,8 +573,9 @@ karavictl rolebinding delete [flags] ``` -h, --help help for create - -r, --role string Role name - -t, --tenant string Tenant name + -r, --role string Role name + -t, --tenant string Tenant name + --insecure boolean insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -638,6 +650,8 @@ karavictl storage get [flags] -h, --help help for get -s, --system-id string System identifier (default "systemid") -t, --type string Type of storage system ("powerflex", "powermax") + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -680,6 +694,8 @@ karavictl storage list [flags] ``` -h, --help help for list + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -730,11 +746,13 @@ karavictl storage create [flags] ``` -e, --endpoint string Endpoint of REST API gateway -h, --help help for create - -i, --insecure Insecure skip verify - -p, --password string Password (default "****") + -a, --array-insecure Array insecure skip verify + -p, --password 
string Password (default "****") -s, --system-id string System identifier (default "systemid") -t, --type string Type of storage system ("powerflex", "powermax") -u, --user string Username (default "admin") + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -746,7 +764,7 @@ karavictl storage create [flags] ##### Output ``` -$ karavictl storage create --endpoint https://1.1.1.1 --insecure --system-id 3000000000011111 --type powerflex --user admin --password ******** +$ karavictl storage create --endpoint https://1.1.1.1 --insecure --array-insecure --system-id 3000000000011111 --type powerflex --user admin --password ******** ``` On success, there will be no output. You may run `karavictl storage get --type --system-id ` to confirm the creation occurred. @@ -772,11 +790,13 @@ karavictl storage update [flags] ``` -e, --endpoint string Endpoint of REST API gateway -h, --help help for update - -i, --insecure Insecure skip verify + -a, --array-insecure Array insecure skip verify -p, --pass string Password (default "****") -s, --system-id string System identifier (default "systemid") -t, --type string Type of storage system ("powerflex", "powermax") -u, --user string Username (default "admin") + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -788,7 +808,7 @@ karavictl storage update [flags] ##### Output ``` -$ karavictl storage update --endpoint https://1.1.1.1 --insecure --system-id 3000000000011111 --type powerflex --user admin --password ******** +$ karavictl storage update --endpoint https://1.1.1.1 --insecure --array-insecure --system-id 3000000000011111 --type powerflex --user admin --password ******** ``` On success, there will be no output. You may run `karavictl storage get --type --system-id ` to confirm the update occurred. 
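As a concrete illustration of the confirmation step above: against a Helm deployment of CSM Authorization, the same `--insecure` and `--addr` flags documented for these commands apply to `karavictl storage get` as well. The system ID is taken from the update example above, while the hostname and port are illustrative placeholders:

```
karavictl storage get --type powerflex --system-id 3000000000011111 --insecure --addr storage.csm-authorization.com:30016
```

If the update took effect, the returned record for the system should reflect the new values.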
@@ -816,6 +836,8 @@ karavictl storage delete [flags] -h, --help help for delete -s, --system-id string System identifier (default "systemid") -t, --type string Type of storage system ("powerflex", "powermax") + --insecure insecure skip verify flag for Helm deployment + --addr address of the container for Helm deployment (pod:port) ``` ##### Options inherited from parent commands @@ -887,6 +909,7 @@ karavictl tenant create [flags] ``` -h, --help help for create -n, --name string Tenant name + --insecure insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -926,6 +949,7 @@ karavictl tenant get [flags] ``` -h, --help help for create -n, --name string Tenant name + --insecure insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -969,6 +993,7 @@ karavictl tenant list [flags] ``` -h, --help help for create + --insecure insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -1016,6 +1041,7 @@ karavictl tenant revoke [flags] ``` -h, --help help for create -n, --name string Tenant name + --insecure insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands @@ -1054,6 +1080,7 @@ karavictl tenant delete [flags] ``` -h, --help help for create -n, --name string Tenant name + --insecure insecure skip verify flag for Helm deployment ``` ##### Options inherited from parent commands diff --git a/content/docs/authorization/deployment/helm/_index.md b/content/docs/authorization/deployment/helm/_index.md index 76d0f47c1a..f119688720 100644 --- a/content/docs/authorization/deployment/helm/_index.md +++ b/content/docs/authorization/deployment/helm/_index.md @@ -13,7 +13,7 @@ The following CSM Authorization components are installed in the specified namesp - role-service, which configures roles for tenants to be bound to - storage-service, which configures backend storage arrays for the proxy-server to foward requests to -The folloiwng third-party components are installed in the specified namespace: +The following third-party components are installed in the specified namespace: - redis, which stores data regarding tenants and their volume ownership, quota, and revokation status - redis-commander, a web management tool for Redis @@ -47,7 +47,7 @@ The following third-party components are optionally installed in the specified n Use the following command to replace or update the secret: - `kubectl create secret generic karavi-config-secret -n authorization --from-file=config=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -` + `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -` 4. Copy the default values.yaml file `cp charts/csm-authorization/values.yaml myvalues.yaml` @@ -108,9 +108,26 @@ helm -n authorization install authorization -f myvalues.yaml charts/csm-authoriz ## Install Karavictl -The Karavictl CLI can be obtained directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section. +1. Download the latest release of karavictl -In order to run `karavictl` commands, the binary needs to exist in your PATH, for example /usr/local/bin. +``` +curl -LO https://github.com/dell/karavi-authorization/releases/latest/download/karavictl +``` + +2. 
Install karavictl + +``` +sudo install -o root -g root -m 0755 karavictl /usr/local/bin/karavictl +``` + +If you do not have root access on the target system, you can still install karavictl to the ~/.local/bin directory: + +``` +chmod +x karavictl +mkdir -p ~/.local/bin +mv ./karavictl ~/.local/bin/karavictl +# and then append (or prepend) ~/.local/bin to $PATH +``` Karavictl commands and intended use can be found [here](../../cli/). @@ -208,17 +225,17 @@ karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --ad Now that the tenant is bound to a role, a JSON Web Token can be generated for the tenant. For example, to generate a token for the `Finance` tenant: ``` -karavictl generate token --tenant Finance --insecure --addr --addr tenant.csm-authorization.com:30016 +karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016 { "Token": "\napiVersion: v1\nkind: Secret\nmetadata:\n name: proxy-authz-tokens\ntype: Opaque\ndata:\n access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUXhPRFlzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLmJIODN1TldmaHoxc1FVaDcweVlfMlF3N1NTVnEyRzRKeGlyVHFMWVlEMkU=\n refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWXhNallzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkxNbWVUSkZlX2dveXR0V0lUUDc5QWVaTy1kdmN5SHAwNUwyNXAtUm9ZZnM=\n" } ``` -With [jq](https://stedolan.github.io/jq/), you process the above response to filter the secret manifest. For example: +Process the above response to filter the secret manifest. For example using sed you can run the following: ``` -karavictl generate token --tenant Finance --insecure --addr --addr tenant.csm-authorization.com:30016 | jq -r '.Token' +karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' apiVersion: v1 kind: Secret metadata: @@ -257,7 +274,7 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - | | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 | | systemID | System ID of the backend storage array. | Yes | " " | - | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true | + | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true | | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. 
| No | default value from values.yaml | diff --git a/content/docs/authorization/deployment/rpm/_index.md b/content/docs/authorization/deployment/rpm/_index.md index 3c037dad45..9e9d413db2 100644 --- a/content/docs/authorization/deployment/rpm/_index.md +++ b/content/docs/authorization/deployment/rpm/_index.md @@ -19,7 +19,29 @@ The CSM for Authorization proxy server requires a Linux host with the following These packages need to be installed on the Linux host: - container-selinux -- https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm +- k3s-selinux-0.4-1 + +Use the appropriate package manager on the machine to install the packages. + +### Using yum on CentOS/RedHat 7: + +yum install -y container-selinux + +yum install -y https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm + +### Using yum on CentOS/RedHat 8: + +yum install -y container-selinux + +yum install -y https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm + +### Dark Sites + +For environments where `yum` will not work, obtain the supported version of container-selinux for your OS version and install it. + +The container-selinux RPMs for CentOS/RedHat 7 and 8 can be downloaded from [https://centos.pkgs.org/7/centos-extras-x86_64/](https://centos.pkgs.org/7/centos-extras-x86_64/) and [https://centos.pkgs.org/8/centos-appstream-x86_64/](https://centos.pkgs.org/8/centos-appstream-x86_64/), respectively. + +The k3s-selinux-0.4-1 RPM can be obtained from [https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm) or [https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm) for CentOS/RedHat 7 and 8, respectively. Download the supported version of k3s-selinux-0.4-1 for your OS version and install it. ## Deploying the CSM Authorization Proxy Server @@ -188,7 +210,7 @@ After creating the role bindings, the next logical step is to generate the acces ``` echo === Generating token === - karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml + karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml echo === Copy token to Driver Host === sshpass -p ${DriverHostPassword} scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml @@ -230,7 +252,7 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - | | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 | | systemID | System ID of the backend storage array. | Yes | " " | - | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true | + | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true | | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. 
| No | default value from values.yaml | @@ -330,7 +352,7 @@ Replace the data in `config.yaml` under the `data` field with your new, encoded >__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json` -`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > kubectl -n $namespace apply -f -` +`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' | kubectl -n $namespace apply -f -` ## CSM for Authorization Proxy Server Dynamic Configuration Settings diff --git a/content/docs/authorization/release/_index.md b/content/docs/authorization/release/_index.md index 9e877ab1b9..4352059fbe 100644 --- a/content/docs/authorization/release/_index.md +++ b/content/docs/authorization/release/_index.md @@ -6,19 +6,22 @@ Description: > Dell Container Storage Modules (CSM) release notes for authorization --- -## Release Notes - CSM Authorization 1.3.0 +## Release Notes - CSM Authorization 1.4.0 ### New Features/Changes -- [CSM-Authorization can deployed with helm](https://github.com/dell/csm/issues/261) - -### Fixed Issues - -- [Authorization proxy server install fails due to missing container-selinux](https://github.com/dell/csm/issues/313) -- [Permissions on karavictl and k3s binaries are incorrect](https://github.com/dell/csm/issues/277) - - - -### Known Issues - -- [Authorization NGINX Ingress Controller fails to install on OpenShift](https://github.com/dell/csm/issues/317) \ No newline at end of file +- CSM 1.4 Release specific changes. ([#350](https://github.com/dell/csm/issues/350)) +- CSM Authorization insecure related entities are renamed to skipCertificateValidation. ([#368](https://github.com/dell/csm/issues/368)) + +### Bugs + +- PowerScale volumes unable to be created with Helm deployment of CSM Authorization. ([#419](https://github.com/dell/csm/issues/419)) +- Authorization CLI documentation does not mention --array-insecure flag when creating or updating storage systems. ([#416](https://github.com/dell/csm/issues/416)) +- Authorization: Add documentation for backing up and restoring redis data. ([#410](https://github.com/dell/csm/issues/410)) +- CSM Authorization doesn't recognize storage with capital letters. ([#398](https://github.com/dell/csm/issues/398)) +- Update Authorization documentation with supported versions of k3s-selinux and container-selinux packages. ([#393](https://github.com/dell/csm/issues/393)) +- Using Authorization without dependency on jq. ([#390](https://github.com/dell/csm/issues/390)) +- Authorization Documentation Improvement. ([#384](https://github.com/dell/csm/issues/384)) +- Unit test failing for csm-authorization. ([#382](https://github.com/dell/csm/issues/382)) +- Karavictl has incorrect permissions after download. ([#360](https://github.com/dell/csm/issues/360)) +- Helm deployment of Authorization denies a valid request path from csi-powerflex. 
([#353](https://github.com/dell/csm/issues/353)) \ No newline at end of file diff --git a/content/docs/csidriver/_index.md b/content/docs/csidriver/_index.md index 732f364787..8115d3840d 100644 --- a/content/docs/csidriver/_index.md +++ b/content/docs/csidriver/_index.md @@ -23,18 +23,17 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes- | SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 | | Red Hat OpenShift | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | | Mirantis Kubernetes Engine | 3.5.x | 3.5.x | 3.5.x | 3.5.x | 3.5.x | -| Google Anthos | 1.9 | 1.8 | no | 1.9 | 1.9 | -| VMware Tanzu | no | no | NFS | NFS | NFS | +| Google Anthos | 1.12 | 1.12 | no | 1.12 | 1.12 | +| VMware Tanzu | no | no | NFS | NFS | NFS,iSCSI | | Rancher Kubernetes Engine | yes | yes | yes | yes | yes | | Amazon Elastic Kubernetes Service
Anywhere | no | yes | no | no | yes | - {{}} ### CSI Driver Capabilities {{}} | Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | |--------------------------|:--------:|:---------:|:---------:|:----------:|:----------:| -| CSI Driver version | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | +| CSI Driver version | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | | Static Provisioning | yes | yes | yes | yes | yes | | Dynamic Provisioning | yes | yes | yes | yes | yes | | Expand Persistent Volume | yes | yes | yes | yes | yes | @@ -53,7 +52,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes- {{
}} | | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | |---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:| -| Storage Array |5978.479.479, 5978.711.711
Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | +| Storage Array |5978.479.479, 5978.711.711, 6079.xxx.xxx
Unisphere 10.0 | 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2, 5.2.0 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 | {{
}} ### Backend Storage Details {{}} @@ -68,4 +67,4 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes- | Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS | | Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin | | Platform-specific configurable settings | Service Level selection
iSCSI CHAP | - | Host IO Limit
Tiering Policy
NFS Host IO size
Snapshot Retention duration | Access Zone
NFS version (3 or 4);Configurable Export IPs | iSCSI CHAP | -{{
}}
\ No newline at end of file
+{{}}
diff --git a/content/docs/csidriver/features/powerflex.md b/content/docs/csidriver/features/powerflex.md
index cfc331a718..f39abd8d26 100644
--- a/content/docs/csidriver/features/powerflex.md
+++ b/content/docs/csidriver/features/powerflex.md
@@ -522,6 +522,69 @@ Then run:
 this test deploys the pod with two ephemeral volumes, and writes some data to them before deleting the pod.
 When creating ephemeral volumes, it is important to specify the following within the volumeAttributes section: volumeName, size, storagepool, and if you want to use a non-default array, systemID.
+## Consuming Existing Volumes with Static Provisioning
+
+To use existing volumes from the PowerFlex array as Persistent Volumes in your Kubernetes environment, perform these steps:
+1. Log into one of the MDMs of the PowerFlex cluster.
+2. Execute these commands to retrieve the `systemID` and `volumeID`.
+   1. `scli --mdm_ip --login --username --password `
+      - **Output:** `Logged in. User role is SuperUser. System ID is `
+   2. `scli --query_volume --volume_name `
+      - **Output:** `Volume ID: Name: `
+3. Create PersistentVolume and use this volume ID in the volumeHandle with the format `systemID`-`volumeID` in the manifest. Modify other parameters according to your needs.
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: existing-vol
+spec:
+  capacity:
+    storage: 8Gi
+  csi:
+    driver: csi-vxflexos.dellemc.com
+    volumeHandle: -
+  volumeMode: Filesystem
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: vxflexos
+```
+4. Create PersistentVolumeClaim to use this PersistentVolume.
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pvol
+spec:
+  accessModes:
+    - ReadWriteOnce
+  volumeMode: Filesystem
+  resources:
+    requests:
+      storage: 8Gi
+  storageClassName: vxflexos
+```
+5. Then use this PVC as a volume in a pod.
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: static-prov-pod
+spec:
+  containers:
+    - name: test
+      image: busybox
+      command: [ "sleep", "3600" ]
+      volumeMounts:
+        - mountPath: "/data0"
+          name: pvol
+  volumes:
+    - name: pvol
+      persistentVolumeClaim:
+        claimName: pvol
+```
+6. After the pod is `Ready` and `Running`, you can begin using this pod and volume.
+
+**Note:** Retrieval of the volume ID is possible through the UI. You must select the volume, navigate to the `Details` section, and click the volume in the graph. This selection will set the filter to the desired volume. At this point the volume ID can be found in the URL.

 ## Dynamic Logging Configuration
diff --git a/content/docs/csidriver/features/powermax.md b/content/docs/csidriver/features/powermax.md
index 697c1040b1..36be5f0745 100644
--- a/content/docs/csidriver/features/powermax.md
+++ b/content/docs/csidriver/features/powermax.md
@@ -160,8 +160,6 @@ To install multiple CSI drivers, follow these steps:
 Starting in v1.4, the CSI PowerMax driver supports the expansion of Persistent Volumes (PVs). This expansion is done online, which is when the PVC is attached to any node.
 
->Note: This feature is not supported for replicated volumes.
-
 To use this feature, enable in `values.yaml`
 
 ```yaml
@@ -564,4 +562,4 @@ spec:
 When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume.
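To make the stricter single-pod semantics concrete, here is a minimal sketch of a PersistentVolumeClaim that requests the `ReadWriteOncePod` access mode; the claim name and size are illustrative and not taken from the driver documentation:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: single-writer-claim   # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod   # mountable by exactly one pod across the whole cluster
  resources:
    requests:
      storage: 8Gi
```

Unlike `ReadWriteOnce`, a second pod that references this claim will fail to schedule even if it lands on the same node.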
-To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes). \ No newline at end of file
+To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instructions from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes).
diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index acaee8b878..085ee57ffd 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md
@@ -22,6 +22,9 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in
1. Open your volume in OneFS and take note of its volume-id.
2. Create a PersistentVolume and use this volume-id as the volumeHandle in the manifest. Modify other parameters according to your needs.
3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of `<volume name>=_=_=<export ID>=_=_=<access zone>=_=_=<cluster name>`
+4. If Quotas are enabled in the driver, you must add the Quota ID to the description of the NFS export in this format:
+`CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA`
+5. The Quota ID can be identified by querying the PowerScale system.
```yaml
apiVersion: v1
diff --git a/content/docs/csidriver/features/powerstore.md b/content/docs/csidriver/features/powerstore.md index e4a3103b11..df8ab6544e 100644 --- a/content/docs/csidriver/features/powerstore.md +++ b/content/docs/csidriver/features/powerstore.md
@@ -188,7 +188,7 @@ provisioner: csi-powerstore.dellemc.com reclaimPolicy: Delete allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class
parameters:
- FsType: xfs
+ csi.storage.k8s.io/fstype: xfs
```
To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC pstore-pvc-demo of size 3Gi, then you can resize it to 30Gi by updating the PVC.
@@ -494,7 +494,7 @@ allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer
parameters:
  arrayID: "GlobalUniqueID"
- FsType: "ext4"
+ csi.storage.k8s.io/fstype: "ext4"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -506,7 +506,7 @@ allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer
parameters:
  arrayID: "GlobalUniqueID"
- FsType: "xfs"
+ csi.storage.k8s.io/fstype: "xfs"
```
Here we specify two storage classes: one of them uses the first array and the `ext4` filesystem, and the other uses the second array and the `xfs` filesystem.
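As a usage illustration, a PVC selects one of these classes by name; a minimal hedged sketch (the class name `powerstore-xfs` is hypothetical, since the actual class names are not shown in the hunk above):
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc                   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: powerstore-xfs # hypothetical name for the xfs storage class above
```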
diff --git a/content/docs/csidriver/installation/helm/isilon.md b/content/docs/csidriver/installation/helm/isilon.md index d1ba801503..3488f66182 100644 --- a/content/docs/csidriver/installation/helm/isilon.md +++ b/content/docs/csidriver/installation/helm/isilon.md @@ -26,6 +26,7 @@ The following are requirements to be met before installing the CSI Driver for De - If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first - If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first - If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../../../../resiliency/deployment/) first +- If enabling Encryption, please refer to the [Encryption deployment steps](../../../../secure/encryption/deployment/) first ### Install Helm 3.0 @@ -46,14 +47,14 @@ controller: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -102,7 +103,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. ### (Optional) Replication feature Requirements @@ -121,7 +122,7 @@ CRDs should be configured during replication prepare stage with repctl as descri ## Install the Driver **Steps** -1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git` to clone the git repository. +1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git` to clone the git repository. 2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. 
You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy *helm/csi-isilon/values.yaml* to a new file, for example *my-isilon-settings.yaml*, to customize settings for installation.
@@ -174,10 +175,13 @@ CRDs should be configured during replication prepare stage with repctl as descri | sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
- | **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contact support for additional information. | - | - |
- | enabled | A boolean that enable/disable podmon feature. | No | false |
+ | **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+ | enabled | A boolean that enables/disables the podmon feature. | No | false |
| image | image for podmon. | No | " " |
-
+ | **encryption** | [Encryption](../../../../secure/encryption/deployment) is an optional feature to apply encryption to CSI volumes. | - | - |
+ | enabled | A boolean that enables/disables the Encryption feature. | No | false |
+ | image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
+
*NOTE:*
- The ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster; otherwise, some of the controller pods remain in a "Pending" state until new nodes are available for scheduling. The installer exits with a warning in that case.
@@ -267,7 +271,7 @@ The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-install ### What happens to my existing storage classes?
-*Upgrading from CSI PowerScale v2.2 driver*:
+*Upgrading from CSI PowerScale v2.3 driver*:
The storage classes created as part of the installation have an annotation - "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish.
*NOTE*:
@@ -287,11 +291,3 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. Sample volume snapshot class manifests are available at `samples/volumesnapshotclass/`. Use these sample manifests to create a volumesnapshotclass for creating volume snapshots; uncomment/update the manifests as per the requirements (a minimal example is sketched after this section).
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerScale v2.2 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.2.
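Referring to the sample manifests mentioned above, a minimal VolumeSnapshotClass sketch for CSI PowerScale (the class name and deletionPolicy shown here are illustrative, not copied from the repository samples):
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: isilon-snapclass          # illustrative name
driver: csi-isilon.dellemc.com    # CSI PowerScale driver name
deletionPolicy: Delete            # remove array snapshots when VolumeSnapshot objects are deleted
```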
-
diff --git a/content/docs/csidriver/installation/helm/powerflex.md b/content/docs/csidriver/installation/helm/powerflex.md index c021fb43e9..af80f767db 100644 --- a/content/docs/csidriver/installation/helm/powerflex.md +++ b/content/docs/csidriver/installation/helm/powerflex.md
@@ -78,14 +78,14 @@ controller: ```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -104,7 +104,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ```
*NOTE:*
-- When using Kubernetes 1.21/1.22/1.23 it is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- When using Kubernetes, it is recommended to use the 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
## Install the Driver
@@ -158,7 +158,7 @@ Use the below command to replace or update the secret:
- The "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use either "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using a complex Kubernetes version like "v1.21.3-mirantis-1", use this kubeVersion check in the helm/csi-vxflexos/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.25.0-0"
5. Default logging options are set during Helm install.
To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
@@ -208,8 +208,8 @@ Use the below command to replace or update the secret: | **vgsnapshotter** | This section allows the configuration of the volume group snapshotter (vgsnapshotter) pod. | - | - |
| enabled | A boolean that enables/disables the vg snapshotter feature. | No | false |
| image | Image for vg snapshotter. | No | " " |
-| **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contact support for additional information. | - | - |
-| enabled | A boolean that enable/disable podmon feature. | No | false |
+| **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+| enabled | A boolean that enables/disables the podmon feature. | No | false |
| image | image for podmon. | No | " " |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
| enabled | A boolean that enables/disables the authorization feature. | No | false |
@@ -312,10 +312,3 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c Starting CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/_ folder. Please use this sample to create a new Volume Snapshot Class for creating Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerFlex v2.2 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.3.
diff --git a/content/docs/csidriver/installation/helm/powermax.md b/content/docs/csidriver/installation/helm/powermax.md index d63d770012..f2298f059e 100--- a/content/docs/csidriver/installation/helm/powermax.md +++ b/content/docs/csidriver/installation/helm/powermax.md
@@ -33,6 +33,7 @@ The following requirements must be met before installing CSI Driver for Dell Pow - Linux multipathing requirements
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
+- If using PowerPath, refer to the _PowerPath for Linux requirements_ section below first
### Install Helm 3
@@ -104,6 +105,16 @@ path_selector "round-robin 0"
no_path_retry 10
```
+### PowerPath for Linux requirements
+
+CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
+
+Set up PowerPath for Linux as follows:
+
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>.<build>.x86_64.rpm`
+- Start the PowerPath service using `systemctl start PowerPath`
+
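Taken together, the node preparation might look like this shell sketch (the archive and RPM file names are placeholders, not actual release artifacts):
```bash
# Illustrative only: substitute the real archive/RPM names from Dell Online Support.
tar -xzf DellEMCPower.LINUX-<version>.tar.gz                        # untar the PowerPath archive
mkdir -p /tmp/powerpath
cp DellEMCPower.LINUX-<version>.<build>.x86_64.rpm /tmp/powerpath/  # copy the RPM to a temporary folder
rpm -ivh /tmp/powerpath/DellEMCPower.LINUX-<version>.<build>.x86_64.rpm  # install PowerPath
systemctl start PowerPath                                           # start the PowerPath service
```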
### (Optional) Volume Snapshot Requirements
Applicable only if you decided to enable the snapshot feature in `values.yaml`
@@ -114,7 +125,7 @@ snapshot: ```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots.
@@ -122,7 +133,7 @@
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -141,7 +152,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
### (Optional) Replication feature Requirements
@@ -162,7 +173,7 @@ CRDs should be configured during replication prepare stage with repctl as descri **Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one.
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
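For instance, a hedged sketch of producing those base64 values (the credentials shown are placeholders):
```bash
echo -n "myusername" | base64   # prints bXl1c2VybmFtZQ==, the value to paste for username
echo -n "mypassword" | base64   # prints bXlwYXNzd29yZA==, the value to paste for password
```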
@@ -174,7 +185,6 @@ CRDs should be configured during replication prepare stage with repctl as descri
4. Create the secret by running `kubectl create -f samples/secret/secret.yaml`.
5. If you are going to install the new CSI PowerMax ReverseProxy service, create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver.
6. Copy the default values.yaml file: `cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml`
-7. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml`
+7. Ensure that Unisphere has 10.0 REST endpoint support by clicking Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
+8. Edit the newly created file and provide values for the following parameters: `vi my-powermax-settings.yaml`
| Parameter | Description | Required | Default |
|-----------|--------------|------------|----------|
@@ -277,14 +289,6 @@ Upgrading from an older version of the driver: The storage classes will be delet Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class for creating Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerMax v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7.0 or higher, before upgrading to 2.3.0.
-
## Sample values file
The following sections have useful snippets from the `values.yaml` file which provide more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
@@ -332,7 +336,7 @@ global: csireverseproxy:
  # Set enabled to true if you want to use proxy
  enabled: true
- image: dellemc/csipowermax-reverseproxy:v1.4.0
+ image: dellemc/csipowermax-reverseproxy:v2.3.0
  tlsSecret: csirevproxy-tls-secret
  deployAsSidecar: true
  port: 2222
@@ -380,7 +384,7 @@ global: csireverseproxy:
  # Set enabled to true if you want to use proxy
  enabled: true
- image: dellemc/csipowermax-reverseproxy:v1.4.0
+ image: dellemc/csipowermax-reverseproxy:v2.3.0
  tlsSecret: csirevproxy-tls-secret
  deployAsSidecar: true
  port: 2222
diff --git a/content/docs/csidriver/installation/helm/powerstore.md b/content/docs/csidriver/installation/helm/powerstore.md index 858b0385db..752dea5855 100644 --- a/content/docs/csidriver/installation/helm/powerstore.md +++ b/content/docs/csidriver/installation/helm/powerstore.md
@@ -22,8 +22,8 @@ The node section of the Helm chart installs the following component in a _Daemon The following are requirements to be met before installing the CSI Driver for Dell PowerStore:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
-- If you plan to use either the Fibre Channel or iSCSI or NVMe/TCP protocol, refer to either _Fibre Channel requirements_ or _Set up the iSCSI Initiator_ or _Set up the NVMe/TCP Initiator_ sections below. You can use NFS volumes without FC or iSCSI or NVMe/TCP configuration.
-> You can use either the Fibre Channel or iSCSI or NVMe/TCP protocol, but you do not need all the three.
+- If you plan to use the Fibre Channel, iSCSI, NVMe/TCP, or NVMe/FC protocol, refer to the _Fibre Channel requirements_, _Set up the iSCSI Initiator_, or _Set up the NVMe Initiator_ section below. You can use NFS volumes without FC, iSCSI, NVMe/TCP, or NVMe/FC configuration.
+> You can use either the Fibre Channel or iSCSI or NVMe/TCP or NVMe/FC protocol, but you do not need all four.
> If you want to use preconfigured iSCSI/FC hosts, be sure to check that they are not part of any host group
- Linux native multipathing requirements
@@ -102,7 +102,7 @@ snapshot: ```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
@@ -110,15 +110,14 @@
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available:
-Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
-#### Installation example
-
+#### Installation example
You can install CRDs and the default snapshot controller by running these commands:
```bash
git clone https://github.com/kubernetes-csi/external-snapshotter/
@@ -129,7 +128,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl ```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
### Volume Health Monitoring
@@ -162,7 +161,6 @@ node: # Default value: None enabled: false ```
-
### (Optional) Replication feature Requirements
Applicable only if you decided to enable the Replication feature in `values.yaml`
@@ -180,11 +178,10 @@ CRDs should be configured during replication prepare stage with repctl as descri ## Install the Driver
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver.
You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace, but make sure to use the same namespace throughout the installation.
-3. Check `helm/csi-powerstore/driver-image.yaml` and confirm the driver image points to new image.
-4. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
+3. Edit the `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays, changing the following parameters:
  - *endpoint*: defines the full URL path to the PowerStore API.
  - *globalID*: specifies what storage cluster the driver should use.
  - *username*, *password*: define credentials for connecting to the array.
@@ -196,12 +193,12 @@ CRDs should be configured during replication prepare stage with repctl as descri
NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
Add more blocks similar to above for each PowerStore array if necessary.
-5. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
-6. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
> If you do not specify `arrayID` parameter in the storage class then the array that was specified as the default would be used for provisioning volumes.
-7. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
-8. Edit the newly created values file and provide values for the following parameters `vi my-powerstore-settings.yaml`:
+4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
+5. Create storage classes using the ones from the `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f <path_to_storageclass_file>`
> If you do not specify the `arrayID` parameter in the storage class, the array that was specified as the default is used for provisioning volumes.
+6. Copy the default values.yaml file: `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
+7. Edit the newly created values file and provide values for the following parameters: `vi my-powerstore-settings.yaml`
| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
@@ -228,6 +225,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| controller.vgsnapshot.enabled | To enable or disable the volume group snapshot feature | No | "true" |
+| images.driverRepository | To use an image from a custom repository | No | dockerhub |
+| version | To use any driver version | No | Latest driver version |
+| allowAutoRoundOffFilesystemSize | Allows the controller to round off the filesystem to 3Gi, which is the minimum supported value | No | false |
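For illustration, these new parameters might appear in *my-powerstore-settings.yaml* roughly as follows (a hedged sketch: the nesting is inferred from the dotted parameter names above, and the repository value is a placeholder):
```yaml
images:
  driverRepository: registry.example.com/dellemc  # placeholder custom repository instead of dockerhub
version: v2.4.0                                   # pin a specific driver version
allowAutoRoundOffFilesystemSize: true             # round filesystem requests below 3Gi up to the minimum
```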
8. Install the driver using the `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
- After that, the driver should be installed; you can check the condition of the driver pods by running `kubectl get all -n csi-powerstore`
@@ -257,7 +257,7 @@ There are samples storage class yaml files available under `samples/storageclass
1. Edit the sample storage class yaml file and update the following parameters:
- *arrayID*: specifies what storage cluster the driver should use; if not specified, the driver will use the storage cluster specified as `default` in `samples/secret/secret.yaml`
-- *FsType*: specifies what filesystem type driver should use, possible variants `ext3`, `ext4`, `xfs`, `nfs`, if not specified driver will use `ext4` by default.
+- *csi.storage.k8s.io/fstype*: specifies what filesystem type the driver should use; possible variants are `ext3`, `ext4`, `xfs`, and `nfs`; if not specified, the driver will use `ext4` by default.
- *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on the NFS target mount directory.
- *allowedTopologies* (Optional): you can also add topology constraints.
```yaml
@@ -281,14 +281,6 @@ kubectl create -f
Starting CSI PowerStore v1.4.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class for creating Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerStore v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4.0 or higher, before upgrading to 2.3.0.
-
## Dynamically update the powerstore secrets
Users can dynamically add or delete array information from the secret. Whenever an update happens, the driver updates the "Host" information in the array. Users can update the secret using the following command:
diff --git a/content/docs/csidriver/installation/helm/unity.md b/content/docs/csidriver/installation/helm/unity.md index 38000db82b..9f666f7ca5 100644 --- a/content/docs/csidriver/installation/helm/unity.md +++ b/content/docs/csidriver/installation/helm/unity.md
@@ -88,7 +88,7 @@ Install CSI Driver for Unity XT using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.3.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.4.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure the _unity_ namespace exists in the Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if it is not present.
@@ -123,6 +123,7 @@ Procedure
| podmon.enabled | service to monitor failing jobs and notify | false | - |
| podmon.image | podmon image name | false | - |
| tenantName | Tenant name added while adding host entry to the array | No | |
+ | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| **controller** | Allows configuration of the controller-specific parameters. | - | - |
| controllerCount | Defines the number of csi-unity controller pods to deploy to the Kubernetes release | Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -169,6 +170,7 @@ Procedure
  allowRWOMultiPodAccess: false
  syncNodeInfoInterval: 5
  maxUnityVolumesPerNode: 0
+ fsGroupPolicy: ReadWriteOnceWithFSType
```
4. For certificate validation of Unisphere REST API calls refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret from the `csi-unity/samples/secret/emptysecret.yaml` file by running the `kubectl create -f csi-unity/samples/secret/emptysecret.yaml` command.
@@ -250,14 +252,14 @@ Procedure
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster.
#### Volume Snapshot CRD's
- The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
- Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+ Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
#### Installation example
@@ -271,7 +273,7 @@ Procedure ```
**Note**:
- - It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+ - It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
@@ -399,14 +401,6 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis
A wide set of annotated volume snapshot class manifests have been provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshots to provision storage.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI Unity XT v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity XT to v1.6.0 or higher, before upgrading to v2.3.0.
-
## Storage Classes
Storage Classes are an essential Kubernetes construct for Storage provisioning.
To know more about Storage Classes, refer to https://kubernetes.io/docs/concepts/storage/storage-classes/ @@ -469,4 +463,4 @@ cd dell-csi-helm-installer ./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade ``` -Note: myvalues.yaml is a values.yaml file which user has used for driver installation. \ No newline at end of file +Note: myvalues.yaml is a values.yaml file which user has used for driver installation. diff --git a/content/docs/csidriver/installation/offline/_index.md b/content/docs/csidriver/installation/offline/_index.md index 127d35c937..2d10b99362 100644 --- a/content/docs/csidriver/installation/offline/_index.md +++ b/content/docs/csidriver/installation/offline/_index.md @@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary For example, here is the output of a request to build an offline bundle for the Dell CSI Operator: ``` -git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git +git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git ``` ``` cd dell-csi-operator/scripts @@ -78,9 +78,9 @@ cd dell-csi-operator/scripts dellemc/csi-isilon:v2.0.0 dellemc/csi-isilon:v2.1.0 - dellemc/csipowermax-reverseproxy:v1.4.0 - dellemc/csi-powermax:v2.0.0 - dellemc/csi-powermax:v2.1.0 + dellemc/csipowermax-reverseproxy:v2.3.0 + dellemc/csi-powermax:v2.3.0 + dellemc/csi-powermax:v2.4.0 dellemc/csi-powerstore:v2.0.0 dellemc/csi-powerstore:v2.1.0 dellemc/csi-unity:v2.0.0 diff --git a/content/docs/csidriver/installation/operator/_index.md b/content/docs/csidriver/installation/operator/_index.md index 68113a0e90..65bd661ba1 100644 --- a/content/docs/csidriver/installation/operator/_index.md +++ b/content/docs/csidriver/installation/operator/_index.md @@ -11,14 +11,14 @@ The Dell CSI Operator is a Kubernetes Operator, which can be used to install and ## Prerequisites #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. 
In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: @@ -37,7 +37,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use 6.0.x version of snapshotter/snapshot-controller. ## Installation @@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa #### Full list of CSI Drivers and versions supported by the Dell CSI Operator | CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version | | ------------------ | --------- | -------------- | -------------------- | --------------------- | -| CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | | CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerMax | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | -| CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI PowerMax | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | -| CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI PowerFlex | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | -| CSI Unity XT | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI PowerScale | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI Unity XT | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | -| CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI Unity XT | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | +| CSI PowerStore | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
@@ -76,7 +76,7 @@ The installation process involves the creation of a `Subscription` object either
* _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available)
* _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades
-**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**.
+**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
#### Pre-Requisite for installation with OLM
Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
@@ -97,7 +97,7 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Run `bash scripts/install.sh` to install the operator.
>NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
@@ -274,12 +274,12 @@ The below notes explain some of the general items to take care of.
1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
```yaml
driver:
- configVersion: v2.3.0
+ configVersion: v2.4.0
```
2. The Volume Health Monitoring feature is optional, and by default it is disabled when drivers are installed via the operator. To enable this feature, modify the block below while upgrading the driver. To get the volume health state, add the external-health-monitor sidecar in the sidecar section, and set the `value` under controller and the `value` under node to true, as shown below:
i. Add controller and node section as below:
```yaml
controller:
@@ -298,26 +298,26 @@ The below notes explain some of the general items to take care of.
- args:
  - --volume-name-prefix=csiunity
  - --default-fstype=ext4
- image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.0
  imagePullPolicy: IfNotPresent
  name: provisioner
- args:
  - --snapshot-name-prefix=csiunitysnap
- image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
+ image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
  imagePullPolicy: IfNotPresent
  name: snapshotter
- args:
  - --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0
+ image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.6.0
  imagePullPolicy: IfNotPresent
  name: external-health-monitor
-- image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
+- image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
  imagePullPolicy: IfNotPresent
  name: attacher
- image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
  imagePullPolicy: IfNotPresent
  name: registrar
-- image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
+- image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
  imagePullPolicy: IfNotPresent
  name: resizer
```
diff --git a/content/docs/csidriver/installation/operator/powermax.md b/content/docs/csidriver/installation/operator/powermax.md index 7c1e13c246..1290b00418 100644 --- a/content/docs/csidriver/installation/operator/powermax.md +++ b/content/docs/csidriver/installation/operator/powermax.md
@@ -36,6 +36,35 @@ Set up the iSCSI initiators as follows: For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+#### Linux multipathing requirements
+
+CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
+
+Set up Linux multipathing as follows:
+
+- All the nodes must have the _Device Mapper Multipathing_ package installed.
+  *NOTE:* When this package is installed, it creates a multipath configuration file which is located at `/etc/multipath.conf`. Please ensure that this file always exists.
+- Enable multipathing using `mpathconf --enable --with_multipathd y`
+- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
+
+As a best practice, use these options to help the operating system and the multipathing software detect path changes efficiently:
+```text
+path_grouping_policy multibus
+path_checker tur
+features "1 queue_if_no_path"
+path_selector "round-robin 0"
+no_path_retry 10
+```
+
+#### PowerPath for Linux requirements
+
+CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
+
+Follow this procedure to set up PowerPath for Linux:
+
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>.<build>.x86_64.rpm`
+- Start the PowerPath service using `systemctl start PowerPath`
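As with the Helm-based installation, the per-node PowerPath setup might look like this sketch (file names are placeholders):
```bash
# Illustrative only: substitute the real archive/RPM names from Dell Online Support.
tar -xzf DellEMCPower.LINUX-<version>.tar.gz                  # untar the PowerPath archive
cp DellEMCPower.LINUX-<version>.<build>.x86_64.rpm /tmp/      # copy the RPM to a temporary folder
rpm -ivh /tmp/DellEMCPower.LINUX-<version>.<build>.x86_64.rpm # install PowerPath
systemctl start PowerPath                                     # start the PowerPath service
```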
#### Create secret for client-side TLS verification (Optional)
Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls).
@@ -179,7 +208,7 @@ metadata: namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver
spec:
  # Image for CSI PowerMax ReverseProxy
- image: dellemc/csipowermax-reverseproxy:v2.1.0 # <- CSI PowerMax Reverse Proxy image
+ image: dellemc/csipowermax-reverseproxy:v2.3.0 # <- CSI PowerMax Reverse Proxy image
  imagePullPolicy: Always
  # TLS secret which contains SSL certificate and private key for the Reverse Proxy server
  tlsSecret: csirevproxy-tls-secret
@@ -265,8 +294,8 @@ metadata: namespace: test-powermax
spec:
  driver:
-   # Config version for CSI PowerMax v2.3.0 driver
-   configVersion: v2.3.0
+   # Config version for CSI PowerMax v2.4.0 driver
+   configVersion: v2.4.0
    # replica: Define the number of PowerMax controller nodes
    # to deploy to the Kubernetes release
    # Allowed values: n, where n > 0
@@ -275,8 +304,8 @@ spec: dnsPolicy: ClusterFirstWithHostNet
  forceUpdate: false
  common:
-   # Image for CSI PowerMax driver v2.3.0
-   image: dellemc/csi-powermax:v2.3.0
+   # Image for CSI PowerMax driver v2.4.0
+   image: dellemc/csi-powermax:v2.4.0
    # imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
    # Allowed values:
    #  Always: Always pull the image.
diff --git a/content/docs/csidriver/installation/operator/powerstore.md b/content/docs/csidriver/installation/operator/powerstore.md index d2b74a2896..78c374f19c 100644 --- a/content/docs/csidriver/installation/operator/powerstore.md +++ b/content/docs/csidriver/installation/operator/powerstore.md
@@ -138,7 +138,7 @@ data:
| X_CSI_POWERSTORE_EXTERNAL_ACCESS | allows specifying additional entries for hostAccess of NFS volumes. Both single IP address and subnet are valid entries | No | " "|
| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| ***Node parameters*** |
-| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
+| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable the iSCSI CHAP feature | No | false |
6. Execute the following command to create the PowerStore custom resource: `kubectl create -f <input_sample_file.yaml>`. The above command will deploy the CSI PowerStore driver.
- After that, the driver should be installed; you can check the condition of the driver pods by running `kubectl get all -n <driver-namespace>`
diff --git a/content/docs/csidriver/installation/operator/unity.md b/content/docs/csidriver/installation/operator/unity.md index 89e8b9a699..d728919dde 100644 --- a/content/docs/csidriver/installation/operator/unity.md +++ b/content/docs/csidriver/installation/operator/unity.md
@@ -97,12 +97,12 @@ metadata: namespace: test-unity
spec:
  driver:
-   configVersion: v2.3.0
+   configVersion: v2.4.0
    replicas: 2
    dnsPolicy: ClusterFirstWithHostNet
    forceUpdate: false
    common:
-     image: "dellemc/csi-unity:v2.3.0"
+     image: "dellemc/csi-unity:v2.4.0"
      imagePullPolicy: IfNotPresent
    sideCars:
      - name: provisioner
@@ -210,7 +206,6 @@ kubectl edit configmap -n unity unity-config-params
3. Also, the snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
## Volume Health Monitoring
-This feature is introduced in CSI Driver for Unity XT version v2.1.0.
### Operator based installation
diff --git a/content/docs/csidriver/installation/test/unity.md b/content/docs/csidriver/installation/test/unity.md index db32d53c98..d969ead6aa 100644 --- a/content/docs/csidriver/installation/test/unity.md +++ b/content/docs/csidriver/installation/test/unity.md
@@ -28,9 +28,9 @@ You can find all the created resources in `test-unity` namespace.
kubectl delete -f ./test/sample.yaml
```
-## Support for SLES 15 SP2
+## Support for SLES 15
-The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
+The CSI Driver for Dell Unity XT requires these packages to be installed on all worker nodes that run on SLES 15.
- open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning**
- nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning**
diff --git a/content/docs/csidriver/release/operator.md b/content/docs/csidriver/release/operator.md index 9696d83067..924c939f57 100644 --- a/content/docs/csidriver/release/operator.md +++ b/content/docs/csidriver/release/operator.md
@@ -3,14 +3,9 @@ title: Operator description: Release notes for Dell CSI Operator ---
-## Release Notes - Dell CSI Operator 1.8.0
+## Release Notes - Dell CSI Operator 1.9.0
->**Note:** There will be a delay in certification of Dell CSI Operator 1.8.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.8.0 release.
-
-### New Features/Changes
-
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
+>**Note:** There will be a delay in certification of Dell CSI Operator 1.9.0, and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.9.0 release.
### Fixed Issues
There are no fixed issues in this release.
diff --git a/content/docs/csidriver/release/powerflex.md b/content/docs/csidriver/release/powerflex.md index 6008b37b0f..9a3b0cd0fa 100644 --- a/content/docs/csidriver/release/powerflex.md +++ b/content/docs/csidriver/release/powerflex.md
@@ -3,19 +3,15 @@ title: PowerFlex description: Release notes for PowerFlex CSI driver ---
-## Release Notes - CSI PowerFlex v2.3.0
+## Release Notes - CSI PowerFlex v2.4.0
### New Features/Changes
-- Added support to configure fsGroupPolicy
-- Removed beta volumesnapshotclass sample files.
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
-- Fixed handling of idempotent snapshots.
+- [Added optional parameter protectionDomain to storageclass](https://github.com/dell/csm/issues/415)
+- [Added InstallationID annotation for volume attributes.](https://github.com/dell/csm/issues/434)
+- Added RHEL 8.6 support.
-### Fixed Issues
-
-- Added label to driver node pod for Resiliency protection.
-- Updated values file to use patched image of vg-snapshotter.
+### Fixed Issues
+- [Enhancements and fixes to volume group snapshotter](https://github.com/dell/csm/issues/371)
### Known Issues
diff --git a/content/docs/csidriver/release/powermax.md b/content/docs/csidriver/release/powermax.md index 20163037c0..e88a744e2a 100644 --- a/content/docs/csidriver/release/powermax.md +++ b/content/docs/csidriver/release/powermax.md
@@ -3,19 +3,17 @@ title: PowerMax description: Release notes for PowerMax CSI driver ---
-## Release Notes - CSI PowerMax v2.3.0
+## Release Notes - CSI PowerMax v2.4.0
+
+> Note: Starting with CSI v2.4.0, only Unisphere 10.0 REST endpoints are supported; it is mandatory to update Unisphere to 10.0. Please find the instructions [here](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true).
### New Features/Changes
-- Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype.
-- Added support for Standalone Helm Charts.
-- Removed beta volumesnapshotclass sample files.
-- Added mapping of PV/PVC to namespace.
-- Added support to configure fsGroupPolicy.
-- Added support to filter topology keys based on user inputs.
-- Added support for SRDF Metro group sharing multiple namespaces.
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
-- Added support to convert replicated volume to non-replicated volume and vice versa for Sync and Async modes.
+- [Online volume expansion for replicated volumes.](https://github.com/dell/csm/issues/336)
+- [Added support for PowerMaxOS 10.](https://github.com/dell/csm/issues/389)
+- [Removed 9.x Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
+- [Added 10.0 Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
+- [Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above).](https://github.com/dell/csm/issues/411)
+- [Added PowerPath support.](https://github.com/dell/csm/issues/436)
### Fixed Issues
There are no fixed issues in this release.
@@ -24,10 +22,7 @@ There are no fixed issues in this release.
| Issue | Workaround |
|-------|------------|
-| Delete Volume fails with the error message: volume is part of masking view | This issue is due to limitations in Unisphere and occurs when Unisphere is overloaded. Currently, there is no workaround for this but it can be avoided by ensuring that Unisphere is not overloaded during such operations. The Unisphere team is assessing a fix for this in a future Unisphere release|
-| Getting initiators list fails with context deadline error | The following error can occur during the driver installation if a large number of initiators are present on the array. There is no workaround for this but it can be avoided by deleting stale initiators on the array|
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
-| GetSnapVolumeList fails with context deadline error | The following error can occur if a large number of snapshots are present on the array.
There is no workaround for this but it can be avoided by deleting unused snapshots on the array| | When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node | | After expanding a file system volume, the new size is not reflected inside the container | This is a known issue and has been reported at https://github.com/dell/csm/issues/378 . Workaround: Remount the volumes<br />
1. Scale the replica count to 0 in the application StatefulSet<br />
2. Scale the replica count back to 1 for the same StatefulSet. | diff --git a/content/docs/csidriver/release/powerscale.md b/content/docs/csidriver/release/powerscale.md index 1a14c62bb6..01909ced74 100644 --- a/content/docs/csidriver/release/powerscale.md +++ b/content/docs/csidriver/release/powerscale.md @@ -3,15 +3,11 @@ title: PowerScale description: Release notes for PowerScale CSI driver --- -## Release Notes - CSI Driver for PowerScale v2.3.0 +## Release Notes - CSI Driver for PowerScale v2.4.0 ### New Features/Changes -- Removed beta volumesnapshotclass sample files. -- Added support for Kubernetes 1.24. -- Added support to increase volume path limit. -- Added support for OpenShift 4.10. -- Added support for CSM Resiliency sidecar via Helm. +- [Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true.](https://github.com/dell/csm/issues/362) ### Fixed Issues @@ -23,7 +19,8 @@ There are no fixed issues in this release. | If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581<br />

**Note:** In Kubernetes 1.22, this limit has been relaxed to 192 characters. | | If stale NFS exports from terminated worker nodes remain in the NFS export client list, the CSI driver fails when it tries to add a new worker node (for RWX volumes). | Users need to manually remove the old entries from the export client list so that new worker nodes can be added successfully. | | Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 | -| fsGroupPolicy may not work as expected without root privileges for NFS only<br />
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter | +| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior, set "RootClientEnabled" = "true" in the storage class parameter | +| Driver logs show "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 | ### Note: diff --git a/content/docs/csidriver/release/powerstore.md b/content/docs/csidriver/release/powerstore.md index f0bbb59e8a..b11c3b8d86 100644 --- a/content/docs/csidriver/release/powerstore.md +++ b/content/docs/csidriver/release/powerstore.md @@ -3,16 +3,13 @@ title: PowerStore description: Release notes for PowerStore CSI driver --- -## Release Notes - CSI PowerStore v2.3.0 +## Release Notes - CSI PowerStore v2.4.0 ### New Features/Changes -- Support Volume Group Snapshots. -- Removed beta volumesnapshotclass sample files. -- Support Configurable Volume Attributes. -- Added support for Kubernetes 1.24. -- Added support for OpenShift 4.10. -- Added support for NVMe/FC protocol. +- [Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype](https://github.com/dell/csm/issues/188) +- [Added support for iSCSI in TKG Qualification](https://github.com/dell/csm/issues/363) +- [Added support for Standalone Helm Chart](https://github.com/dell/csm/issues/355) ### Fixed Issues diff --git a/content/docs/csidriver/release/unity.md b/content/docs/csidriver/release/unity.md index 701d0778d4..9a0668e3c3 100644 --- a/content/docs/csidriver/release/unity.md +++ b/content/docs/csidriver/release/unity.md @@ -3,16 +3,11 @@ title: Unity XT description: Release notes for Unity XT CSI driver --- -## Release Notes - CSI Unity XT v2.3.0 +## Release Notes - CSI Unity XT v2.4.0 ### New Features/Changes -- Removed beta volumesnapshotclass sample files. -- Added support for Kubernetes 1.24. -- Added support for OpenShift 4.10. - -### Fixed Issues -CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI. +- [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361) ### Known Issues diff --git a/content/docs/csidriver/troubleshooting/powerflex.md b/content/docs/csidriver/troubleshooting/powerflex.md index 373605cc8e..f53deb66cd 100644 --- a/content/docs/csidriver/troubleshooting/powerflex.md +++ b/content/docs/csidriver/troubleshooting/powerflex.md @@ -22,6 +22,7 @@ description: Troubleshooting PowerFlex Driver | Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) | | When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:<br />
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | | CSI-PowerFlex volumes cannot mount and are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix:<br />
1. Remove any multipath mapping involving a powerflex volume with `multipath -f `
2. Blacklist CSI-PowerFlex volumes in multipath config file | + | When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powerflex) for more details | >*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease diff --git a/content/docs/csidriver/troubleshooting/powermax.md b/content/docs/csidriver/troubleshooting/powermax.md index 76cc3d4b23..ba6db41fbf 100644 --- a/content/docs/csidriver/troubleshooting/powermax.md +++ b/content/docs/csidriver/troubleshooting/powermax.md @@ -11,3 +11,4 @@ description: Troubleshooting PowerMax Driver | `kubectl logs powermax-controller- -n driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty and has valid certificates| |Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.| | When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down<br />
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | +| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powermax) for more details | diff --git a/content/docs/csidriver/troubleshooting/powerscale.md b/content/docs/csidriver/troubleshooting/powerscale.md index e3f233a76c..8c35ed482a 100644 --- a/content/docs/csidriver/troubleshooting/powerscale.md +++ b/content/docs/csidriver/troubleshooting/powerscale.md @@ -18,3 +18,4 @@ Here are some installation failures that might be encountered and how to mitigat | The `kubectl logs isilon-controller-0 -n isilon -c driver` logs show the driver **Authentication failed. Trying to re-authenticate** when using Session-based authentication | The issue has been resolved from OneFS 9.3 onwards. For OneFS versions prior to 9.3 with session-based authentication, either SmartConnect can be created against a single node of Isilon, or the CSI Driver can be installed/pointed to a particular node of the Isilon; otherwise, basic authentication can be used by setting isiAuthType in `values.yaml` to 0 | | When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time. This is to allow faster creation (within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. | | While attaching a ReadOnly PVC from a volume snapshot to a pod, the mount operation will fail with error `mounting ... failed, reason given by server: No such file or directory`, if the RO volume's access zone (non-System access zone) on Isilon is configured with a dedicated service IP (which is the same as the `AzServiceIP` storage class parameter). This operation results in accessing the snapshot base directory (`/ifs`) and results in overstepping the RO volume's access zone's base directory, which OneFS doesn't allow. | Provide a service IP that belongs to the RO volume's access zone and that sets the top-level `/ifs` as its zone base directory. | +|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.| diff --git a/content/docs/csidriver/troubleshooting/powerstore.md b/content/docs/csidriver/troubleshooting/powerstore.md index 62c1622262..7ba746fb2a 100644 --- a/content/docs/csidriver/troubleshooting/powerstore.md +++ b/content/docs/csidriver/troubleshooting/powerstore.md @@ -11,4 +11,5 @@ description: Troubleshooting PowerStore Driver | If PVC is not getting created and getting the following error in PVC description:<br />
```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with correct credentials | | If the NVMeFC pod is not getting created and the host loses the SSH connection, causing the driver pods to go to error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection | | When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down<br />
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | -| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. | \ No newline at end of file +| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. | +|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerstore/blob/main/helm/csi-powerstore/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.| \ No newline at end of file diff --git a/content/docs/csidriver/troubleshooting/unity.md b/content/docs/csidriver/troubleshooting/unity.md index 9905215390..cd398664b5 100644 --- a/content/docs/csidriver/troubleshooting/unity.md +++ b/content/docs/csidriver/troubleshooting/unity.md @@ -14,3 +14,4 @@ description: Troubleshooting Unity XT Driver | PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node driver pods in the cluster with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** | | Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | | When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down<br />
2. Delete the VolumeAttachment to the node that went down.
Now the volume can be attached to the new node. | +| Volume attachments are not removed after deleting the pods | If you are using a Kubernetes version < 1.24, assign a volume name prefix such that the total length of the volume name created on the array is more than 68 bytes. From Kubernetes version >= 1.24 onwards, this issue is addressed.<br />
Please refer to the Kubernetes issue https://github.com/kubernetes/kubernetes/issues/97230 for a detailed explanation. | diff --git a/content/docs/csidriver/upgradation/drivers/isilon.md b/content/docs/csidriver/upgradation/drivers/isilon.md index 75fca2acda..5fcdd65f99 100644 --- a/content/docs/csidriver/upgradation/drivers/isilon.md +++ b/content/docs/csidriver/upgradation/drivers/isilon.md @@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver --- You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator. -## Upgrade Driver from version 2.2.0 to 2.3.0 using Helm +## Upgrade Driver from version 2.3.0 to 2.4.0 using Helm **Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. Clone the repository using `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements. +1. Clone the repository using `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name, say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements. 2. Change to the directory dell-csi-helm-installer to install the Dell PowerScale driver: `cd dell-csi-helm-installer` 3. Upgrade the CSI Driver for Dell PowerScale using the following command: diff --git a/content/docs/csidriver/upgradation/drivers/offline.md b/content/docs/csidriver/upgradation/drivers/offline.md new file mode 100644 index 0000000000..752de08e0f --- /dev/null +++ b/content/docs/csidriver/upgradation/drivers/offline.md @@ -0,0 +1,9 @@ +--- +title: Offline Upgrade of Dell CSI Storage Providers +linktitle: Offline Upgrade +description: Offline Upgrade of Dell CSI Storage Providers +--- + +1. To perform an offline upgrade of the driver, please create an offline bundle as mentioned [here](./../../../installation/offline#building-an-offline-bundle). +2. Once the bundle is created, please unpack the bundle by following the steps mentioned [here](./../../../installation/offline#unpacking-the-offline-bundle-and-preparing-for-installation). +3. Please use the driver-specific upgrade steps to upgrade. \ No newline at end of file diff --git a/content/docs/csidriver/upgradation/drivers/operator.md b/content/docs/csidriver/upgradation/drivers/operator.md index eab8bedd28..51298cee83 100644 --- a/content/docs/csidriver/upgradation/drivers/operator.md +++ b/content/docs/csidriver/upgradation/drivers/operator.md @@ -13,7 +13,7 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the ### Using Installation Script -1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`. +1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`. 2. cd dell-csi-operator 3. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator; the sequence is consolidated in the sketch below. >Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
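Taken together, the installation-script steps above reduce to the following shell sequence. This is a minimal sketch, not an official script: it assumes a host with git and bash available and kubectl access to the target cluster, and it only chains the commands already quoted in the steps.

```
# Clone and check out the v1.9.0 release of dell-csi-operator.
git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git
cd dell-csi-operator
# Run the install script in upgrade mode; per step 3 above, this
# installs the latest version of the operator.
bash scripts/install.sh --upgrade
```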
@@ -25,5 +25,5 @@ The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role whi - If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version. - If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process. -**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.5.0`. +**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading the operator to `v1.9.0`. diff --git a/content/docs/csidriver/upgradation/drivers/powerflex.md b/content/docs/csidriver/upgradation/drivers/powerflex.md index 5c181f183e..75fbe21a34 100644 --- a/content/docs/csidriver/upgradation/drivers/powerflex.md +++ b/content/docs/csidriver/upgradation/drivers/powerflex.md @@ -23,6 +23,20 @@ You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operato - To update any installation parameter after the driver has been installed, change the `myvalues.yaml` file and run the install script with the option _\-\-upgrade_, for example: `./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade`. - The logging configuration from v1.5 will not work in v2.1, since the log configuration parameters are now set in the values.yaml file located at helm/csi-vxflexos/values.yaml. Please set the logging configuration parameters in the values.yaml file. +- You cannot upgrade between drivers with different fsGroupPolicies. To check the current driver's fsGroupPolicy, use this command: +``` kubectl describe csidriver csi-vxflexos.dellemc.com``` +and check the "Spec" section: +``` +... +Spec: + Attach Required: true + Fs Group Policy: ReadWriteOnceWithFSType + Pod Info On Mount: true + Requires Republish: false + Storage Capacity: false +... +``` + ## Upgrade using Dell CSI Operator: **Note:** Upgrading the Operator does not upgrade the CSI Driver. diff --git a/content/docs/csidriver/upgradation/drivers/powermax.md b/content/docs/csidriver/upgradation/drivers/powermax.md index 98e1fd3059..de810ef264 100644 --- a/content/docs/csidriver/upgradation/drivers/powermax.md +++ b/content/docs/csidriver/upgradation/drivers/powermax.md @@ -10,16 +10,37 @@ Description: Upgrade PowerMax CSI driver You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator. -## Update Driver from v2.2 to v2.3 using Helm +**Note:** CSI Driver for PowerMax v2.4.0 requires Unisphere 10.0 REST endpoint support. +### Updating the CSI Driver to use 10.0 Unisphere + +1. Upgrade Unisphere to have 10.0 endpoint support. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true) +2. Update the `my-powermax-settings.yaml` file to use an endpoint with 10.0 support. + +## Update Driver from v2.3 to v2.4 using Helm **Steps** -1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.3 driver. +1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.4 driver. 2. Update the values file as needed. 3.
Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`. *NOTE:* - If you are upgrading from a driver version that was installed using Helm v2, ensure that you install Helm 3 before installing the driver. - To update any installation parameter after the driver has been installed, change the `my-powermax-settings.yaml` file and run the install script with the option _\-\-upgrade_, for example: `./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`. +- You cannot upgrade between drivers with different fsGroupPolicies. To check the current driver's fsGroupPolicy, use this command: +``` kubectl describe csidriver csi-powermax``` +and check the "Spec" section: + +``` +... +Spec: + Attach Required: true + Fs Group Policy: ReadWriteOnceWithFSType + Pod Info On Mount: false + Requires Republish: false + Storage Capacity: false +... + +``` ## Upgrade using Dell CSI Operator: **Note:** Upgrading the Operator does not upgrade the CSI Driver. diff --git a/content/docs/csidriver/upgradation/drivers/powerstore.md b/content/docs/csidriver/upgradation/drivers/powerstore.md index 089fa38c68..757a31c4b2 100644 --- a/content/docs/csidriver/upgradation/drivers/powerstore.md +++ b/content/docs/csidriver/upgradation/drivers/powerstore.md @@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator. -## Update Driver from v2.2 to v2.3 using Helm +## Update Driver from v2.3 to v2.4 using Helm Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. +1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. 2. Edit the `helm/config.yaml` file and configure connection information for your PowerStore arrays, changing the following parameters: - *endpoint*: defines the full URL path to the PowerStore API. - *globalID*: specifies what storage cluster the driver should use @@ -28,7 +28,7 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues. Add more blocks similar to above for each PowerStore array if necessary. 3. (optional) create new storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f ` - >Storage classes created by v1.4/v2.0/v2.1 driver will not be deleted, v2.2 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1 driver and make it default in the `config.yaml` file. + >Storage classes created by the v1.4/v2.0/v2.1/v2.2/v2.3 drivers will not be deleted; the v2.4 driver will use the default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3 in your cluster, be sure to include the same array you used for the v1.4/v2.0/v2.1/v2.2/v2.3 driver and make it default in the `config.yaml` file. 4.
Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml``` 5. Copy the default values.yaml file `cp ./helm/csi-powerstore/values.yaml ./dell-csi-helm-installer/my-powerstore-settings.yaml` and update parameters as per the requirement. 6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./dell-csi-helm-installer/csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`. diff --git a/content/docs/csidriver/upgradation/drivers/unity.md b/content/docs/csidriver/upgradation/drivers/unity.md index 26b4e4d47d..a1bfe7a3cc 100644 --- a/content/docs/csidriver/upgradation/drivers/unity.md +++ b/content/docs/csidriver/upgradation/drivers/unity.md @@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator Preparing myvalues.yaml is the same as explained in the install section. -To upgrade the driver from csi-unity v2.2.0 to csi-unity v2.3.0 +To upgrade the driver from csi-unity v2.3.0 to csi-unity v2.4.0 -1. Get the latest csi-unity v2.3.0 code from Github using using `git clone -b v2.3.0 https://github.com/dell/csi-unity.git`. +1. Get the latest csi-unity v2.4.0 code from GitHub using `git clone -b v2.4.0 https://github.com/dell/csi-unity.git`. 2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed. 3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command: `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade` diff --git a/content/docs/deployment/_index.md b/content/docs/deployment/_index.md index 23b93beb33..8e6ecf2e58 100644 --- a/content/docs/deployment/_index.md +++ b/content/docs/deployment/_index.md @@ -1,13 +1,76 @@ --- title: "Deployment" linkTitle: "Deployment" +no_list: true description: Deployment of CSM for Replication weight: 1 --- +The Container Storage Modules along with the required CSI Drivers can each be deployed using the CSM Operator. +>Note: Currently, the CSM Operator is in tech preview and is not supported in production environments. + +{{< cardpane >}} + {{< card header="[**CSM Operator**](csmoperator/)" + footer="Supports driver [PowerScale](csmoperator/drivers/powerscale/), modules [Authorization](csmoperator/modules/authorization/) [Replication](csmoperator/modules/replication/)">}} + Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
+[...More on installation instructions](csmoperator/) + {{< /card >}} +{{< /cardpane >}} The Container Storage Modules and the required CSI Drivers can each be deployed following the links below: - [Dell CSI Drivers Installation](../csidriver/installation) - [Dell Container Storage Module for Observability](../observability/deployment) - [Dell Container Storage Module for Authorization](../authorization/deployment) - [Dell Container Storage Module for Resiliency](../resiliency/deployment) - [Dell Container Storage Module for Replication](../replication/deployment) \ No newline at end of file + + +{{< cardpane >}} + {{< card header="[Dell CSI Drivers Installation via Helm](../csidriver/installation/helm)" + footer="Installs [PowerStore](../csidriver/installation/helm/powerstore/) [PowerMax](../csidriver/installation/helm/powermax/) [PowerScale](../csidriver/installation/helm/isilon/) [PowerFlex](../csidriver/installation/helm/powerflex/) [Unity](../csidriver/installation/helm/unity/)">}} + The Dell CSI Helm installer installs the CSI Driver components using the provided Helm charts. + [...More on installation instructions](../csidriver/installation/helm) + {{< /card >}} + {{< card header="[Dell CSI Drivers Installation via offline installer](../csidriver/installation/offline)" + footer="[Offline installation for all drivers](../csidriver/installation/offline)">}} + Both Helm and the Dell CSI Operator support offline installation of the Dell CSI Storage Providers via the `csi-offline-bundle.sh` script, which creates a usable package. + [...More on installation instructions](../csidriver/installation/offline) + {{< /card >}} +{{< /cardpane >}} +{{< cardpane >}} + {{< card header="[Dell CSI Drivers Installation via operator](../csidriver/installation/operator)" + footer="Installs [PowerStore](../csidriver/installation/operator/powerstore/) [PowerMax](../csidriver/installation/operator/powermax/) [PowerScale](../csidriver/installation/operator/isilon/) [PowerFlex](../csidriver/installation/operator/powerflex/) [Unity](../csidriver/installation/operator/unity/)">}} + Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually. + [...More on installation instructions](../csidriver/installation/operator) + {{< /card >}} +{{< /cardpane >}} +{{< cardpane >}} + {{< card header="[Dell Container Storage Module for Observability](../observability/deployment)" + footer="Installs Observability Module">}} + CSM for Observability can be deployed via Helm, the CSM for Observability Installer, or the CSM for Observability Offline Installer. + [...More on installation instructions](../observability/deployment) + {{< /card >}} + {{< card header="[Dell Container Storage Module for Authorization](../authorization/deployment)" + footer="Installs Authorization Module">}} + CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
+ [...More on installation instructions](../authorization/deployment) + {{< /card >}} +{{< /cardpane >}} +{{< cardpane >}} + {{< card header="[Dell Container Storage Module for Resiliency](../resiliency/deployment)" + footer="Installs Resiliency Module">}} + CSI drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed by variables in the chart. It can be updated via the _podmon_ block specified in _values.yaml_. + [...More on installation instructions](../resiliency/deployment) + {{< /card >}} + {{< card header="[Dell Container Storage Module for Replication](../replication/deployment)" + footer="Installs Replication Module">}} + The Replication module can be installed by installing repctl, the Container Storage Modules (CSM) for Replication Controller, and the CSI driver after enabling replication. + [...More on installation instructions](../replication/deployment) + {{< /card >}} +{{< /cardpane >}} +{{< cardpane >}} + {{< card header="[Dell Container Storage Module for Application Mobility](../applicationmobility/deployment)" + footer="Installs Application Mobility Module">}} + The Application Mobility module can be installed via Helm charts. This is a tech preview release, and it requires a license for installation. + [...More on installation instructions](../applicationmobility/deployment) + {{< /card >}} + {{< card header="[Dell Container Storage Module for Encryption](../secure/encryption/deployment)" + footer="Installs Encryption Module">}} + Encryption can be optionally installed via the PowerScale CSI driver Helm chart. + [...More on installation instructions](../secure/encryption/deployment) + {{< /card >}} +{{< /cardpane >}} diff --git a/content/docs/deployment/csminstaller/_index.md b/content/docs/deployment/csminstaller/_index.md deleted file mode 100644 index 4527ddfd9f..0000000000 --- a/content/docs/deployment/csminstaller/_index.md +++ /dev/null @@ -1,193 +0,0 @@ ---- -title: "CSM Installer" -linkTitle: "CSM Installer" -description: Container Storage Modules Installer -weight: 1 ---- - -{{% pageinfo color="primary" %}} -The CSM Installer is currently deprecated and will no longer be supported as of CSM v1.4.0 -{{% /pageinfo %}} - ->>**Note: The CSM Installer only supports installation of CSM 1.0 Modules and CSI Drivers in environments that do not have any existing deployments of CSM or CSI Drivers. The CSM Installer does not support the upgrade of existing CSM or CSI Driver deployments.** - -The CSM (Container Storage Modules) Installer simplifies the deployment and management of Dell Container Storage Modules and CSI Drivers to provide persistent storage for your containerized workloads. - -## CSM Installer Supported Modules and Dell CSI Drivers - -| Modules/Drivers | CSM 1.0 | -| - | :-: | -| Authorization | 1.0 | -| Observability | 1.0 | -| Replication | 1.0 | -| Resiliency | 1.0 | -| CSI Driver for PowerScale | v2.0 | -| CSI Driver for Unity XT | v2.0 | -| CSI Driver for PowerStore | v2.0 | -| CSI Driver for PowerFlex | v2.0 | -| CSI Driver for PowerMax | v2.0 | - -The CSM Installer must first be deployed in a Kubernetes environment using Helm. After which, the CSM Installer can be used through the following interfaces: -- [CSM CLI](./csmcli) -- [REST API](./csmapi) - -## How to Deploy the Container Storage Modules Installer - -1.
Add the `dell` helm repository: - -``` -helm repo add dell https://dell.github.io/helm-charts -``` - -**If securing the API service and database, following steps 2 to 4 to generate the certificates, or skip to step 5 to deploy without certificates** - -2. Generate self-signed certificates using the following commands: - -``` -mkdir api-certs - -openssl req \ - -newkey rsa:4096 -nodes -sha256 -keyout api-certs/ca.key \ - -x509 -days 365 -out api-certs/ca.crt -subj '/' - -openssl req \ - -newkey rsa:4096 -nodes -sha256 -keyout api-certs/cert.key \ - -out api-certs/cert.csr -subj '/' - -openssl x509 -req -days 365 -in api-certs/cert.csr -CA api-certs/ca.crt \ - -CAkey api-certs/ca.key -CAcreateserial -out api-certs/cert.crt -``` - -3. If required, download the `cockroach` binary used to generate certificates for the cockroach-db: -``` -curl https://binaries.cockroachdb.com/cockroach-v21.1.8.linux-amd64.tgz | tar -xz && sudo cp -i cockroach-v21.1.8.linux-amd64/cockroach /usr/local/bin/ -``` - -4. Generate the certificates required for the cockroach-db service: -``` -mkdir db-certs - -cockroach cert create-ca --certs-dir=db-certs --ca-key=db-certs/ca.key - -cockroach cert create-node cockroachdb-0.cockroachdb.csm-installer.svc.cluster.local cockroachdb-public cockroachdb-0.cockroachdb --certs-dir=db-certs/ --ca-key=db-certs/ca.key - -``` - In case multiple instances of cockroachdb are required add all nodes names while creating nodes on the certificates -``` -cockroach cert create-node cockroachdb-0.cockroachdb.csm-installer.svc.cluster.local cockroachdb-1.cockroachdb.csm-installer.svc.cluster.local cockroachdb-2.cockroachdb.csm-installer.svc.cluster.local cockroachdb-public cockroachdb-0.cockroachdb cockroachdb-1.cockroachdb cockroachdb-2.cockroachdb --certs-dir=db-certs/ --ca-key=db-certs/ca.key -``` - -``` -cockroach cert create-client root --certs-dir=db-certs/ --ca-key=db-certs/ca.key - -cockroach cert list --certs-dir=db-certs/ -``` - -5. Create a values.yaml file that contains JWT, Cipher key, and Admin username and password of CSM Installer that are required by the installer during helm installation. See the [Configuration](#configuration) section for other values that can be set during helm installation. - -> __Note__: `jwtKey` will be used as a shared secret in HMAC algorithm for generating jwt token, `cipherKey` will be used as a symmetric key in AES cipher for encryption of storage system credentials. Those parameters are arbitrary, and you can set them to whatever you like. Just ensure that `cipherKey` is exactly 32 characters long. - -``` -# string of any length -jwtKey: - -# string of exactly 32 characters -cipherKey: "" - -# Admin username of CSM Installer -adminUserName: - -# Admin password of CSM Installer -adminPassword: -``` - -6. 
Follow step `a` if certificates are being used or step `b` if certificates are not being used: - -a) Install the helm chart, specifying the certificates generated in the previous steps: -``` -helm install -n csm-installer --create-namespace \ - --set-file serviceCertificate=api-certs/cert.crt \ - --set-file servicePrivateKey=api-certs/cert.key \ - --set-file databaseCertificate=db-certs/node.crt \ - --set-file databasePrivateKey=db-certs/node.key \ - --set-file dbClientCertificate=db-certs/client.root.crt \ - --set-file dbClientPrivateKey=db-certs/client.root.key \ - --set-file caCrt=db-certs/ca.crt \ - -f values.yaml \ - csm-installer dell/csm-installer -``` -b) If not deploying with certificates, execute the following command: -``` -helm install -n csm-installer --create-namespace \ - --set-string scheme=http \ - --set-string dbSSLEnabled="false" \ - -f values.yaml \ - csm-installer dell/csm-installer -``` - -> __Note__: In an OpenShift environment, the cockroachdb StatefulSet will run privileged pods so that it can mount the Persistent Volume used for storage. Follow the documentation for your OpenShift version to enable privileged pods. - -### Configuration - -| Parameter | Description | Default | -|----------------------------------|-----------------------------------------------|---------------------------------------------------------| -| `csmInstallerCount` | Number of replicas for the CSM Installer Deployment | `1`| -| `dbInstanceCount` | Number of replicas for the CSM Database StatefulSet | `2` | -| `imagePullPolicy` | Image pull policy for the CSM Installer images | `Always` | -| `host` | Host or IP that will be used to bind to the CSM Installer API service | `0.0.0.0` | -| `port` | Port that will be used to bind to the CSM Installer API service | `8080` | -| `scheme` | Scheme used for the CSM Installer API service. Valid values are `https` and `http` | `https` | -| `jwtKey` | Key used to sign the JWT token | | -| `cipherKey` | Key used to encrypt/decrypt user and storage system credentials. Must be 32 characters in length. | | -| `logLevel` | Log level used for the CSM Installer. Valid values are `DEBUG`, `INFO`, `WARN`, `ERROR`, and `FATAL` | `INFO` | -| `dbHost` | Host name of the Cockroach DB instance | `cockroachdb-public` | -| `dbPort` | Port number to access the Cockroach DB instance | `26257` | -| `dbSSLEnabled` | Enable SSL for the Cockroach DB connectiong | `true` | -| `installerImage` | Location of the CSM Installer Docker Image | `dellemc/dell-csm-installer:v1.0.0` | -| `dataCollectorImage`| Location of the CSM Data Collector Docker Image | `dellemc/csm-data-collector:v1.0.0` | -| `adminUserName` | Username to authenticate with the CSM Installer | | -| `adminPassword` | Password to authenticate with the CSM Installer | | -| `dbVolumeDirectory` | Directory on the worker node to use for the Persistent Volume | `/var/lib/cockroachdb` | -| `api_server_ip` | If using Swagger, set to public IP or host of the CSM Installer API service | `localhost` | - -## How to Upgrade the Container Storage Modules Installer - -When a new version of the CSM Installer helm chart is available, the following steps can be used to upgrade to the latest version. - ->Note: Upgrading the CSM Installer does not upgrade the Dell CSI Drivers or modules that were previously deployed with the installer. The CSM Installer does not support upgrading of the Dell CSI Drivers or modules. 
The Dell CSI Drivers and modules must be deleted and re-deployed using the latest CSM Installer in order to get the most recent version of the Dell CSI Driver and modules. - -1. Update the helm repository. -``` -helm repo update -``` - -2. Follow step `a` if certificates were used during the initial installation of the helm chart or step `b` if certificates were not used: - -a) Upgrade the helm chart, specifying the certificates used during initial installation: -``` -helm upgrade -n csm-installer \ - --set-file serviceCertificate=api-certs/cert.crt \ - --set-file servicePrivateKey=api-certs/cert.key \ - --set-file databaseCertificate=db-certs/node.crt \ - --set-file databasePrivateKey=db-certs/node.key \ - --set-file dbClientCertificate=db-certs/client.root.crt \ - --set-file dbClientPrivateKey=db-certs/client.root.key \ - --set-file caCrt=db-certs/ca.crt \ - -f values.yaml \ - csm-installer dell/csm-installer -``` - -b) If not deploying with certificates, execute the following command: -``` -helm upgrade -n csm-installer \ - --set-string scheme=http \ - --set-string dbSSLEnabled="false" \ - -f values.yaml \ - csm-installer dell/csm-installer -``` -## How to Uninstall the Container Storage Modules Installer - -1. Delete the Helm chart -``` -helm delete -n csm-installer csm-installer -``` diff --git a/content/docs/deployment/csminstaller/csmapi.md b/content/docs/deployment/csminstaller/csmapi.md deleted file mode 100644 index 812f36b835..0000000000 --- a/content/docs/deployment/csminstaller/csmapi.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "CSM REST API" -type: swagger -weight: 1 -description: Reference for the CSM REST API ---- - -{{< swaggerui src="../swagger.yaml" >}} \ No newline at end of file diff --git a/content/docs/deployment/csminstaller/csmcli.md b/content/docs/deployment/csminstaller/csmcli.md deleted file mode 100644 index 3711351969..0000000000 --- a/content/docs/deployment/csminstaller/csmcli.md +++ /dev/null @@ -1,269 +0,0 @@ ---- -title : CSM CLI -linktitle: CSM CLI -weight: 2 -description: > - Dell Container Storage Modules (CSM) Command Line Interface(CLI) Deployment and Management ---- -`csm` is a command-line client for installation of Dell Container Storage Modules and CSI Drivers for Kubernetes clusters. - -## Pre-requisites - -1. [Deploy the Container Storage Modules Installer](../../deployment) -2. Download/Install the `csm` binary from Github: https://github.com/dell/csm. Alternatively, you can build the binary by: - - cloning the `csm` repository - - changing into `csm/cmd/csm` directory - - running `make build` -3. create a `cli_env.sh` file that contains the correct values for the below variables. 
And export the variables by running `source ./cli_env.sh` - -```console -# Change this to CSM API Server IP -export API_SERVER_IP="127.0.0.1" - -# Change this to CSM API Server Port -export API_SERVER_PORT="31313" - -# CSM API Server protocol - allowed values are https & http -export SCHEME="https" - -# Path to store JWT -export AUTH_CONFIG_PATH="/home/user/installer-token/" -``` - -## Usage - -```console -~$ ./csm -h -csm is command line tool for csm application - -Usage: - csm [flags] - csm [command] - -Available Commands: - add add cluster, configuration or storage - approve-task approve task for application - authenticate authenticate user - change change - subcommand is password - create create application - delete delete storage, cluster, configuration or application - get get storage, cluster, application, configuration, supported driver, module, storage type - help Help about any command - reject-task reject task for an application - update update storage, configuration or cluster - -Flags: - -h, --help help for csm-cli - -Use "csm [command] --help" for more information about a command. -``` - -### Authenticate the User - -To begin with, you need to authenticate the user who will be managing the CSM Installer and its components. - -```console -./csm authenticate --username= --password= -``` -Or more securely, run the above command without `--password` to be prompted for one - -```console -./csm authenticate --username= -Enter user's password: - -``` - -### Change Password - -To change password follow below command - -```console -./csm change password --username= -``` - -### View Supported Platforms - -You can now view the supported DellCSI Drivers - -```console -./csm get supported-drivers -``` - -You can also view the supported Modules - -```console -./csm get supported-modules -``` - -And also view the supported Storage Array Types - -```console -./csm get supported-storage-arrays -``` - -### Add a Cluster - -You can now add a cluster by providing cluster detail name and Kubeconfig path - -```console -./csm add cluster --clustername --configfilepath -``` - -### Upload Configuration Files - -You can now add a configuration file that can be used for creating application by providing filename and path - -```console -./csm add configuration --filename --filepath -``` - -### Add a Storage System - -You can now add storage endpoints, array type and its unique id - -```console -./csm add storage --endpoint --storage-type --unique-id --username -``` - -The optional `--meta-data` flag can be used to provide additional meta-data for the storage system that is used when creating Secrets for the CSI Driver. These fields include: - - isDefault: Set to true if this storage system is used as default for multi-array configuration - - skipCertificateValidation: Set to true to skip certificate validation - - mdmId: Comma separated list of MDM IPs for PowerFlex - - nasName: NAS Name for PowerStore - - blockProtocol: Block Protocol for PowerStore - - port: Port for PowerScale - - portGroups: Comma separated list of port group names for PowerMax - -### Create an Application - -You may now create an application depending on the specific use case. Below are the common use cases: - -
- CSI Driver - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays -``` -
- -
- CSI Driver with CSM Authorization - -CSM Authorization requires a `token.yaml` issued by storage Admin from the CSM Authorization Server, a certificate file, and the of the authorization server. The `token.yaml` and `cert` should be added by following the steps in [adding configuration file](#upload-configuration-files). CSM Authorization does not yet support all CSI Drivers/platforms(See [supported platforms documentation](../../authorization/#supported-platforms) or [supported platforms via CLI](#view-supported-platforms))). -Finally, run the command below: - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays \ - --module-type authorization: \ - --module-configuration "karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost=" - -``` -
- -
- CSM Observability(Standalone) - -CSM Observability depends on driver config secret(s) corresponding to the metric(s) you want to enable. Please see [CSM Observability](../../observability/metrics) for all Supported Metrics. For the sake of demonstration, assuming we want to enable [CSM Metrics for PowerFlex](../../observability/metrics/powerflex), the PowerFlex secret yaml should be added by following the steps in [adding configuration file](#upload-configuration-files). -Once this is done, run the command below: - -```console -./csm create application --clustername \ - --name \ - --module-type observability: \ - --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true" -``` -
- -
- CSM Observability(Standalone) with CSM Authorization - -See the individual steps for configuaration file pre-requisites for CSM Observability (Standalone) with CSM Authorization - -```console -./csm create application --clustername \ - --name \ - --module-type "observability:,authorization:" \ - --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true,karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost=" -``` -
- -
- CSI Driver for Dell PowerMax with reverse proxy module - - To deploy CSI Driver for Dell PowerMax with reverse proxy module, first upload reverse proxy tls crt and tls key via [adding configuration file](#upload-configuration-files). Then, use the below command to create application: - -```console -./csm create application --clustername \ - --driver-type powermax: --name \ - --storage-arrays \ - --module-type reverse-proxy: \ - --module-configuration reverseProxy.tlsSecretKeyFile=,reverseProxy.tlsSecretCertFile= -``` -
- -
- CSI Driver with replication module - - To deploy CSI driver with replication module, first add a target cluster through [adding cluster](#add-a-cluster). Then, use the below command(this command is an example to deploy CSI Driver for Dell PowerStore with replication module) to create application:: - -```console -./csm create application --clustername \ - --driver-type powerstore: --name \ - --storage-arrays \ - --module-configuration target_cluster= \ - --module-type replication: -``` -
- - -
- CSI Driver with other module(s) not covered above - - Assuming you want to deploy a driver with `module A` and `module B`. If they have specific configurations of `A.image="docker:v1"`,`A.filename=hello`, and `B.namespace=world`. - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays \ - --module-type "module A:,module B:" \ - --module-configuration "A.image=docker:v1,A.filename=hello,B.namespace=world" -``` -
-
- -> __Note__: - - `--driver-type` and `--module-type` flags in create application command MUST match the values from the [supported CSM platforms](#view-supported-platforms) - - Replication module supports only using a pair of clusters at a time (source and a target/or single cluster) from CSM installer, However `repctl` can be used if needed to add multiple pairs of target clusters. Using replication module with other modules during application creation is not yet supported. - -### Approve application/task - -You may now approve the task so that you can continue to work with the application - -```console -./csm approve-task --applicationname -``` - -### Reject application/task - -You may want to reject a task or application to discontinue the ongoing process - -```console -./csm reject-task --applicationname -``` - -### Delete application/task - -If you want to delete an application - -```console -./csm delete application --name -``` - -> __Note__: When deleting an application, the namespace and Secrets are not deleted. These resources need to be deleted manually. See more in [Troubleshooting](../troubleshooting#after-deleting-an-application-why-cant-i-re-create-the-same-application). - -> __Note__: All commands and associated syntax can be displayed with -h or --help - diff --git a/content/docs/deployment/csminstaller/swagger.yaml b/content/docs/deployment/csminstaller/swagger.yaml deleted file mode 100644 index 15a9b8b227..0000000000 --- a/content/docs/deployment/csminstaller/swagger.yaml +++ /dev/null @@ -1,1395 +0,0 @@ -basePath: /api/v1 -definitions: - ApplicationCreateRequest: - properties: - cluster_id: - type: string - driver_configuration: - items: - type: string - type: array - driver_type_id: - type: string - module_configuration: - items: - type: string - type: array - module_types: - items: - type: string - type: array - name: - type: string - storage_arrays: - items: - type: string - type: array - required: - - cluster_id - - driver_type_id - - name - type: object - ApplicationResponse: - properties: - application_output: - type: string - cluster_id: - type: string - driver_configuration: - items: - type: string - type: array - driver_type_id: - type: string - id: - type: string - module_configuration: - items: - type: string - type: array - module_types: - items: - type: string - type: array - name: - type: string - storage_arrays: - items: - type: string - type: array - type: object - ClusterResponse: - properties: - cluster_id: - type: string - cluster_name: - type: string - nodes: - description: The nodes - type: string - type: object - ConfigFileResponse: - properties: - id: - type: string - name: - type: string - type: object - DriverResponse: - properties: - id: - type: string - storage_array_type_id: - type: string - version: - type: string - type: object - ErrorMessage: - properties: - arguments: - items: - type: string - type: array - code: - description: HTTPStatusEnum Possible HTTP status values of completed or failed - jobs - enum: - - 200 - - 201 - - 202 - - 204 - - 400 - - 401 - - 403 - - 404 - - 422 - - 429 - - 500 - - 503 - type: integer - message: - description: Message string. - type: string - message_l10n: - description: Localized message - type: object - severity: - description: |- - SeverityEnum - The severity of the condition - * INFO - Information that may be of use in understanding the failure. It is not a problem to fix. - * WARNING - A condition that isn't a failure, but may be unexpected or a contributing factor. 
It may be necessary to fix the condition to successfully retry the request. - * ERROR - An actual failure condition through which the request could not continue. - * CRITICAL - A failure with significant impact to the system. Normally failed commands roll back and are just ERROR, but this is possible - enum: - - INFO - - WARNING - - ERROR - - CRITICAL - type: string - type: object - ErrorResponse: - properties: - http_status_code: - description: HTTPStatusEnum Possible HTTP status values of completed or failed - jobs - enum: - - 200 - - 201 - - 202 - - 204 - - 400 - - 401 - - 403 - - 404 - - 422 - - 429 - - 500 - - 503 - type: integer - messages: - description: |- - A list of messages describing the failure encountered by this request. At least one will - be of Error severity because Info and Warning conditions do not cause the request to fail - items: - $ref: '#/definitions/ErrorMessage' - type: array - type: object - ModuleResponse: - properties: - id: - type: string - name: - type: string - standalone: - type: boolean - version: - type: string - type: object - StorageArrayCreateRequest: - properties: - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - password: - type: string - storage_array_type: - type: string - unique_id: - type: string - username: - type: string - required: - - management_endpoint - - password - - storage_array_type - - unique_id - - username - type: object - StorageArrayResponse: - properties: - id: - type: string - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - storage_array_type_id: - type: string - unique_id: - type: string - username: - type: string - type: object - StorageArrayTypeResponse: - properties: - id: - type: string - name: - type: string - type: object - StorageArrayUpdateRequest: - properties: - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - password: - type: string - storage_array_type: - type: string - unique_id: - type: string - username: - type: string - type: object - TaskResponse: - properties: - _links: - additionalProperties: - additionalProperties: - type: string - type: object - type: object - application_name: - type: string - id: - type: string - logs: - type: string - status: - type: string - type: object -info: - contact: {} - description: CSM Deployment API - title: CSM Deployment API - version: "1.0" -paths: - /applications: - get: - consumes: - - application/json - description: List all applications - operationId: list-applications - parameters: - - description: Application Name - in: query - name: name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ApplicationResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all applications - tags: - - application - post: - consumes: - - application/json - description: Create a new application - operationId: create-application - parameters: - - description: Application info for creation - in: body - name: application - required: true - schema: - $ref: '#/definitions/ApplicationCreateRequest' - produces: - - application/json - responses: - "202": - description: Accepted - schema: - type: string - "400": - description: 
Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new application - tags: - - application - /applications/{id}: - delete: - consumes: - - application/json - description: Delete an application - operationId: delete-application - parameters: - - description: Application ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete an application - tags: - - application - get: - consumes: - - application/json - description: Get an application - operationId: get-application - parameters: - - description: Application ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ApplicationResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get an application - tags: - - application - /clusters: - get: - consumes: - - application/json - description: List all clusters - operationId: list-clusters - parameters: - - description: Cluster Name - in: query - name: cluster_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ClusterResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all clusters - tags: - - cluster - post: - consumes: - - application/json - description: Create a new cluster - operationId: create-cluster - parameters: - - description: Name of the cluster - in: formData - name: name - required: true - type: string - - description: kube config file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new cluster - tags: - - cluster - /clusters/{id}: - delete: - consumes: - - application/json - description: Delete a cluster - operationId: delete-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: 
- $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete a cluster - tags: - - cluster - get: - consumes: - - application/json - description: Get a cluster - operationId: get-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a cluster - tags: - - cluster - patch: - consumes: - - application/json - description: Update a cluster - operationId: update-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - - description: Name of the cluster - in: formData - name: name - type: string - - description: kube config file - in: formData - name: file - type: file - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a cluster - tags: - - cluster - /configuration-files: - get: - consumes: - - application/json - description: List all configuration files - operationId: list-config-file - parameters: - - description: Name of the configuration file - in: query - name: config_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ConfigFileResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all configuration files - tags: - - configuration-file - post: - consumes: - - application/json - description: Create a new configuration file - operationId: create-config-file - parameters: - - description: Name of the configuration file - in: formData - name: name - required: true - type: string - - description: Configuration file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/ConfigFileResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new configuration file - tags: - - configuration-file - /configuration-files/{id}: - delete: - consumes: - - application/json - description: Delete a configuration file - operationId: delete-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - produces: - - 
application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete a configuration file - tags: - - configuration-file - get: - consumes: - - application/json - description: Get a configuration file - operationId: get-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ConfigFileResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a configuration file - tags: - - configuration-file - patch: - consumes: - - application/json - description: Update a configuration file - operationId: update-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - - description: Name of the configuration file - in: formData - name: name - required: true - type: string - - description: Configuration file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "204": - description: No Content - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a configuration file - tags: - - configuration-file - /driver-types: - get: - consumes: - - application/json - description: List all driver types - operationId: list-driver-types - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/DriverResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all driver types - tags: - - driver-type - /driver-types/{id}: - get: - consumes: - - application/json - description: Get a driver type - operationId: get-driver-type - parameters: - - description: Driver Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/DriverResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a driver type - tags: - - driver-type - /module-types: - get: - consumes: - - application/json - description: List all module types - operationId: list-module-type - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: 
'#/definitions/ModuleResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all module types - tags: - - module-type - /module-types/{id}: - get: - consumes: - - application/json - description: Get a module type - operationId: get-module-type - parameters: - - description: Module Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ModuleResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a module type - tags: - - module-type - /storage-array-types: - get: - consumes: - - application/json - description: List all storage array types - operationId: list-storage-array-type - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/StorageArrayTypeResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all storage array types - tags: - - storage-array-type - /storage-array-types/{id}: - get: - consumes: - - application/json - description: Get a storage array type - operationId: get-storage-array-type - parameters: - - description: Storage Array Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/StorageArrayTypeResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a storage array type - tags: - - storage-array-type - /storage-arrays: - get: - consumes: - - application/json - description: List all storage arrays - operationId: list-storage-arrays - parameters: - - description: Unique ID - in: query - name: unique_id - type: string - - description: Storage Type - in: query - name: storage_type - type: string - produces: - - application/json - responses: - "202": - description: Accepted - schema: - items: - $ref: '#/definitions/StorageArrayResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all storage arrays - tags: - - storage-array - post: - consumes: - - application/json - description: Create a new storage array - operationId: create-storage-array - parameters: - - description: Storage Array info for creation - in: body - name: storageArray - required: true - schema: - $ref: 
'#/definitions/StorageArrayCreateRequest' - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/StorageArrayResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new storage array - tags: - - storage-array - /storage-arrays/{id}: - delete: - consumes: - - application/json - description: Delete storage array - operationId: delete-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: Success - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete storage array - tags: - - storage-array - get: - consumes: - - application/json - description: Get storage array - operationId: get-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/StorageArrayResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get storage array - tags: - - storage-array - patch: - consumes: - - application/json - description: Update a storage array - operationId: update-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - type: string - - description: Storage Array info for update - in: body - name: storageArray - required: true - schema: - $ref: '#/definitions/StorageArrayUpdateRequest' - produces: - - application/json - responses: - "204": - description: No Content - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a storage array - tags: - - storage-array - /tasks: - get: - consumes: - - application/json - description: List all tasks - operationId: list-tasks - parameters: - - description: Application Name - in: query - name: application_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/TaskResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all tasks - tags: - - task - /tasks/{id}: - get: - consumes: - - application/json - description: Get a task - operationId: get-task - parameters: - - description: Task ID - in: path - 
name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/TaskResponse' - "303": - description: See Other - schema: - $ref: '#/definitions/TaskResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a task - tags: - - task - /tasks/{id}/approve: - post: - consumes: - - application/json - description: Approve state change for an application - operationId: approve-state-change-application - parameters: - - description: Task ID - in: path - name: id - required: true - type: string - - description: Task is associated with an Application update operation - in: query - name: updating - type: boolean - produces: - - application/json - responses: - "202": - description: Accepted - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Approve state change for an application - tags: - - task - /tasks/{id}/cancel: - post: - consumes: - - application/json - description: Cancel state change for an application - operationId: cancel-state-change-application - parameters: - - description: Task ID - in: path - name: id - required: true - type: string - - description: Task is associated with an Application update operation - in: query - name: updating - type: boolean - produces: - - application/json - responses: - "200": - description: Success - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Cancel state change for an application - tags: - - task - /users/change-password: - patch: - consumes: - - application/json - description: Change password for existing user - operationId: change-password - parameters: - - description: Enter New Password - format: password - in: query - name: password - required: true - type: string - produces: - - application/json - responses: - "204": - description: No Content - "401": - description: Unauthorized - schema: - $ref: '#/definitions/ErrorResponse' - "403": - description: Forbidden - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - BasicAuth: [] - summary: Change password for existing user - tags: - - user - /users/login: - post: - consumes: - - application/json - description: Login for existing user - operationId: login - produces: - - application/json - responses: - "200": - description: Bearer Token for Logged in User - schema: - type: string - "401": - description: Unauthorized - schema: - $ref: '#/definitions/ErrorResponse' - "403": - description: Forbidden - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - BasicAuth: [] - summary: Login for existing user - tags: - - user 
-securityDefinitions: - ApiKeyAuth: - in: header - name: Authorization - type: apiKey - BasicAuth: - type: basic -swagger: "2.0" diff --git a/content/docs/deployment/csminstaller/troubleshooting.md b/content/docs/deployment/csminstaller/troubleshooting.md deleted file mode 100644 index 3fa403c8da..0000000000 --- a/content/docs/deployment/csminstaller/troubleshooting.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: "Troubleshooting" -linkTitle: "Troubleshooting" -weight: 3 -Description: > - Troubleshooting guide ---- - -## Frequently Asked Questions - - - [Why does the installation fail due to an invalid cipherKey value?](#why-does-the-installation-fail-due-to-an-invalid-cipherkey-value) - - [Why does the cluster-init pod show the error "cluster has already been initialized"?](#why-does-the-cluster-init-pod-show-the-error-cluster-has-already-been-initialized) - - [Why does the precheck fail when creating an application?](#why-does-the-precheck-fail-when-creating-an-application) - - [How can I view detailed logs for the CSM Installer?](#how-can-i-view-detailed-logs-for-the-csm-installer) - - [After deleting an application, why can't I re-create the same application?](#after-deleting-an-application-why-cant-i-re-create-the-same-application) - - [How can I upgrade CSM if I've used the CSM Installer to deploy CSM 1.0?](#how-can-i-upgrade-csm-if-ive-used-the-csm-installer-to-deploy-csm-10) - -### Why does the installation fail due to an invalid cipherKey value? -The `cipherKey` value used during deployment of the CSM Installer must be exactly 32 characters in length and contained within quotes. - -### Why does the cluster-init pod show the error "cluster has already been initialized"? -During the initial start-up of the CSM Installer, the database will be initialized by the cluster-init job. If the CSM Installer is uninstalled and then re-installed on the same cluster, this error may be shown due to the Persistent Volume for the database already containing an initialized database. The CSM Installer will function as normal and the cluster-init job can be ignored. - -If a clean installation of the CSM Installer is required, the `dbVolumeDirectory` (default location `/var/lib/cockroachdb`) must be deleted from the worker node which is hosting the Persistent Volume. After this directory is deleted, the CSM Installer can be re-installed. - -Caution: Deleting the `dbVolumeDirectory` location will remove any data persisted by the CSM Installer including clusters, storage systems, and installed applications. - -### Why does the precheck fail when creating an application? -Each CSI Driver and CSM Module has required software or CRDs that must be installed before the application can be deployed in the cluster. These prechecks are verified when the `csm create application` command is executed. If the error message "create application failed" is displayed, [review the CSM Installer logs](#how-can-i-view-detailed-logs-for-the-csm-installer) to view details about the failed prechecks. - -If the precheck fails due to required software (e.g. iSCSI, NFS, SDC) not installed on the cluster nodes, follow these steps to address the issue: -1. Delete the cluster from the CSM Installer using the `csm delete cluster` command. -2. Update the nodes in the cluster by installing required software. -3. Add the cluster to the CSM Installer using the `csm add cluster` command. - -### How can I view detailed logs for the CSM Installer? 
-Detailed logs of the CSM Installer can be displayed using the following command: - ``` - kubectl logs -f -n deploy/dell-csm-installer - ``` - -### After deleting an application, why can't I re-create the same application? -After deleting an application using the `csm delete application` command, the namespace and other non-application resources including Secrets are not deleted from the cluster. This is to prevent removing any resources that may not have been created by the CSM Installer. The namespace must be manually deleted before attempting to re-create the same application using the CSM Installer. - -### How can I upgrade CSM if I've used the CSM Installer to deploy CSM 1.0? -The CSM Installer currently does not support upgrade. If you used the CSM Installer to deploy CSM 1.0 you will need to perform the following steps to upgrade: -1. Using the CSM installer, [delete](../csmcli#delete-applicationtask) any driver/module applications that were installed (ex: `csm delete application --name `). -2. Uninstall the CSM Installer (ex: helm delete -n ) -3. Follow the deployment instructions [here](../../) to redeploy the CSI driver and modules. \ No newline at end of file diff --git a/content/docs/deployment/csmoperator/_index.md b/content/docs/deployment/csmoperator/_index.md index c89d7e9d74..38360d862e 100644 --- a/content/docs/deployment/csmoperator/_index.md +++ b/content/docs/deployment/csmoperator/_index.md @@ -6,10 +6,10 @@ weight: 1 --- {{% pageinfo color="primary" %}} -The Dell CSM Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed. +The Dell Container Storage Modules Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed. {{% /pageinfo %}} -The Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually. +The Dell Container Storage Modules Operator is a Kubernetes Operator that can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually. ## Supported Platforms Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenShift. Supported versions are listed below. @@ -29,6 +29,7 @@ Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenS | CSM Modules | Version | ConfigVersion | | ------------------ | --------- | -------------- | | CSM Authorization | 1.2.0 + | v1.2.0 + | +| CSM Authorization | 1.3.0 + | v1.3.0 + | ## Installation Dell CSM Operator can be installed manually or via Operator Hub. @@ -62,7 +63,7 @@ Dell CSM Operator can be installed manually or via Operator Hub. {{< imgproc install_olm_pods.jpg Resize "2500x" >}}{{< /imgproc >}} ->**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**. +>**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**. 
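For reference, a minimal sketch of installing that OLM release on an upstream Kubernetes cluster, assuming the standard install script published with each operator-framework OLM release (verify the URL against the actual release assets before use):

```console
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/install.sh | bash -s v0.18.3
```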
### Installation via Operator Hub `dell-csm-operator` can be installed via Operator Hub on upstream Kubernetes clusters & Red Hat OpenShift Clusters. diff --git a/content/docs/deployment/csmoperator/drivers/_index.md b/content/docs/deployment/csmoperator/drivers/_index.md index 18129d5071..91c428b596 100644 --- a/content/docs/deployment/csmoperator/drivers/_index.md +++ b/content/docs/deployment/csmoperator/drivers/_index.md @@ -37,7 +37,7 @@ kubectl create -f client/config/crd kubectl create -f deploy/kubernetes/snapshot-controller ``` *NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. +- It is recommended to use the 6.0.x version of the snapshotter/snapshot-controller. ## Installing CSI Driver via Operator diff --git a/content/docs/license/_index.md b/content/docs/license/_index.md new file mode 100644 index 0000000000..ec7bd9d734 --- /dev/null +++ b/content/docs/license/_index.md @@ -0,0 +1,20 @@ +--- +title: "License" +linkTitle: "License" +weight: 12 +Description: > + Dell Container Storage Modules (CSM) License +--- + +The tech-preview releases of [Container Storage Modules](https://github.com/dell/csm) for Application Mobility and Encryption require a license. This section details how to request a license. + +## Requesting a License +1. Request a license using the [Container Storage Modules License Request](https://app.smartsheet.com/b/form/5e46fad643874d56b1f9cf4c9f3071fb) by providing these details: +- **Full Name**: Full name of the person requesting the license +- **Email Address**: The license will be emailed to this email address +- **Company / Organization**: Company or organization where the license will be used +- **License Type**: Select either *Application Mobility* or *Encryption*, depending on the CSM module that will be used with the license +- **List of kube-system namespace UIDs**: The license will only function on the provided list of Kubernetes clusters. Find the UID of the kube-system namespace using `kubectl get ns kube-system -o yaml` or a similar `oc` command. Provide the UIDs as a comma-separated list. +- (Optional) **Send me a copy of my responses**: A copy of the license request will be sent to the provided email address +2. After submitting the form, a response will be provided within several business days with an attachment containing the license. +3. Refer to the specific CSM module documentation for adding the license to the Kubernetes cluster. \ No newline at end of file diff --git a/content/docs/observability/_index.md b/content/docs/observability/_index.md index 8f9f05fc63..cc8165d4a3 100644 --- a/content/docs/observability/_index.md +++ b/content/docs/observability/_index.md @@ -14,13 +14,14 @@ Description: > Metrics data is collected and pushed to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector), so it can be processed, and exported in a format consumable by Prometheus. SSL certificates for TLS between nodes are handled by [cert-manager](https://github.com/jetstack/cert-manager). -CSM for Observability is composed of several services, each living in its own GitHub repository, that can be installed following one of the three deployments we support [here](deployment). Contributions can be made to this repository or any of the CSM for Observability repositories listed below. +CSM for Observability is composed of several services, each living in its own GitHub repository, that can be installed following one of the four deployments we support [here](deployment). 
Contributions can be made to this repository or any of the CSM for Observability repositories listed below. {{}} | Name | Repository | Description | | ---- | --------- | ----------- | -| Performance Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Performance Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. | -| Performance Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Performance Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. | +| Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. | +| Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. | +| Metrics for PowerScale | [CSM Metrics for PowerScale](https://github.com/dell/csm-metrics-powerscale) | Metrics for PowerScale captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerScale. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. 
| | Volume Topology | [CSM Topology](https://github.com/dell/karavi-topology) | Topology provides Kubernetes administrators with the topology data related to containerized storage that is provisioned by a CSI (Container Storage Interface) Driver for Dell storage products. The Topology service is enabled by default as part of the CSM for Observability Helm Chart [values file](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml). Please visit the repository for more information. | {{
}} @@ -31,14 +32,14 @@ CSM for Observability provides the following capabilities: {{}} | Capability | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | | - | :-: | :-: | :-: | :-: | :-: | -| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | no | yes | +| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | yes | yes | | Collect and expose File System Metrics via the OpenTelemetry Collector | no | no | no | no | yes | | Collect and expose export (k8s) node metrics via the OpenTelemetry Collector | no | yes | no | no | no | -| Collect and expose filesystem capacity metrics via the OpenTelemetry Collector | no | no | no | no | yes | -| Collect and expose block storage capacity metrics via the OpenTelemetry Collector | no | yes | no | no | yes | -| Non-disruptive config changes | no | yes | no | no | yes | -| Non-disruptive log level changes | no | yes | no | no | yes | -| Grafana Dashboards for displaying metrics and topology data | no | yes | no | no | yes | +| Collect and expose block storage metrics via the OpenTelemetry Collector | no | yes | no | no | yes | +| Collect and expose file storage metrics via the OpenTelemetry Collector | no | no | no | yes | yes | +| Non-disruptive config changes | no | yes | no | yes | yes | +| Non-disruptive log level changes | no | yes | no | yes | yes | +| Grafana Dashboards for displaying metrics and topology data | no | yes | no | yes | yes | {{
}} ## Supported Operating Systems/Container Orchestrator Platforms @@ -56,9 +57,9 @@ CSM for Observability provides the following capabilities: ## Supported Storage Platforms {{}} -| | PowerFlex | PowerStore | -|---------------|:-------------------:|:----------------:| -| Storage Array | 3.5.x, 3.6.x | 1.0.x, 2.0.x, 2.1.x | +| | PowerFlex | PowerStore | PowerScale | +|---------------|:-------------------:|:----------------:|:----------------:| +| Storage Array | 3.5.x, 3.6.x | 1.0.x, 2.0.x, 2.1.x, 3.0 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | {{
}} ## Supported CSI Drivers @@ -69,6 +70,7 @@ CSM for Observability supports the following CSI drivers and versions. | ------------- | ---------- | ------------------ | | CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + | | CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0 + | {{}} ## Topology Data @@ -78,17 +80,16 @@ CSM for Observability provides Kubernetes administrators with the topology data | Field | Description | | -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | Namespace | The namespace associated with the persistent volume claim | +| Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume | | Persistent Volume | The name of the persistent volume | +| Storage Class | The storage class associated with the persistent volume | +| Provisioned Size | The provisioned size of the persistent volume | | Status | The status of the persistent volume. "Released" indicates the persistent volume does not have a claim. "Bound" indicates the persistent volume has a claim | -| Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume | -| CSI Driver | The name of the CSI driver that was responsible for provisioning the volume on the storage system | | Created | The date the persistent volume was created | -| Provisioned Size | The provisioned size of the persistent volume | -| Storage Class | The storage class associated with the persistent volume | -| Storage System Volume Name | The name of the volume on the storage system that is associated with the persistent volume | -| Storage Pool | The storage pool name the volume/storage class is associated with | | Storage System | The storage system ID or IP address the volume is associated with | | Protocol | The storage system protocol type the volume/storage class is associated with | +| Storage Pool | The storage pool name the volume/storage class is associated with | +| Storage System Volume Name | The name of the volume on the storage system that is associated with the persistent volume | {{}} ## TLS Encryption diff --git a/content/docs/observability/deployment/_index.md b/content/docs/observability/deployment/_index.md index 50efaa2c3f..62b10741bb 100644 --- a/content/docs/observability/deployment/_index.md +++ b/content/docs/observability/deployment/_index.md @@ -239,8 +239,8 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste dashboards: enabled: true - ## Additional grafana server CofigMap mounts - ## Defines additional mounts with CofigMap. CofigMap must be manually created in the namespace. + ## Additional grafana server ConfigMap mounts + ## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace. extraConfigmapMounts: [] # If you created a ConfigMap on the previous step, delete [] and uncomment the lines below # - name: certs-configmap # mountPath: /etc/ssl/certs/ca-certificates.crt @@ -275,23 +275,29 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste Once Grafana is properly configured, you can import the pre-built observability dashboards. Log into Grafana and click the + icon in the side menu. 
Then click Import. From here you can upload the JSON files or paste the JSON text directly into the text area. Below are the locations of the dashboards that can be imported: -| Dashboard | Description | -| ------------------- | --------------------------------- | -| [PowerFlex: I/O Performance by Kubernetes Node](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/sdc_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by Kubernetes node | -| [PowerFlex: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | -| [PowerFlex: Storage Pool Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/storage_consumption.json) | Provides visibility into the total, used, and available capacity for a storage class and associated underlying storage construct. | -| [PowerStore: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/volume_io_metrics.json) | *As of Release 0.4.0:* Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | -| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. | +| Dashboard | Description | +|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [PowerFlex: I/O Performance by Kubernetes Node](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/sdc_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by Kubernetes node | +| [PowerFlex: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | +| [PowerFlex: Storage Pool Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/storage_consumption.json) | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct | +| [PowerStore: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | +| [PowerStore: I/O Performance by File System](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/filesystem_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by filesystem | +| [PowerStore: Array and Storage Class Consumption By CSI 
Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/storage_consumption.json) | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct | +| [PowerScale: I/O Performance by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by cluster | +| [PowerScale: Capacity by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_capacity.json) | Provides visibility into the total, used, available capacity and directory quota capacity by cluster | +| [PowerScale: Capacity by Quota](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/volume_capacity.json) | Provides visibility into the subscribed, remaining capacity and usage by quota | +| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. | ## Dynamic Configuration Some parameters can be configured/updated during runtime without restarting the CSM for Observability services. These parameters will be stored in ConfigMaps that can be updated on the Kubernetes cluster. This will automatically change the settings on the services. -| ConfigMap | Observability Service | Parameters | -| - | - | - | -| karavi-metrics-powerflex-configmap | karavi-metrics-powerflex |
  • COLLECTOR_ADDR
  • PROVISIONER_NAMES
  • POWERFLEX_SDC_METRICS_ENABLED
  • POWERFLEX_SDC_IO_POLL_FREQUENCY
  • POWERFLEX_VOLUME_IO_POLL_FREQUENCY
  • POWERFLEX_VOLUME_METRICS_ENABLED
  • POWERFLEX_STORAGE_POOL_METRICS_ENABLED
  • POWERFLEX_STORAGE_POOL_POLL_FREQUENCY
  • POWERFLEX_MAX_CONCURRENT_QUERIES
  • LOG_LEVEL
  • LOG_FORMAT
| -| karavi-metrics-powerstore-configmap | karavi-metrics-powerstore |
  • COLLECTOR_ADDR
  • PROVISIONER_NAMES
  • POWERSTORE_VOLUME_METRICS_ENABLED
  • POWERSTORE_VOLUME_IO_POLL_FREQUENCY
  • POWERSTORE_SPACE_POLL_FREQUENCY
  • POWERSTORE_ARRAY_POLL_FREQUENCY
  • POWERSTORE_FILE_SYSTEM_POLL_FREQUENCY
  • POWERSTORE_MAX_CONCURRENT_QUERIES
  • LOG_LEVEL
  • LOG_FORMAT
  • ZIPKIN_URI
  • ZIPKIN_SERVICE_NAME
  • ZIPKIN_PROBABILITY
| -| karavi-topology-configmap | karavi-topology |
  • PROVISIONER_NAMES
  • LOG_LEVEL
  • LOG_FORMAT
  • ZIPKIN_URI
  • ZIPKIN_SERVICE_NAME
  • ZIPKIN_PROBABILITY
| +| ConfigMap | Observability Service | Parameters | +|-------------------------------------|---------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| karavi-metrics-powerflex-configmap | karavi-metrics-powerflex |
  • COLLECTOR_ADDR
  • PROVISIONER_NAMES
  • POWERFLEX_SDC_METRICS_ENABLED
  • POWERFLEX_SDC_IO_POLL_FREQUENCY
  • POWERFLEX_VOLUME_IO_POLL_FREQUENCY
  • POWERFLEX_VOLUME_METRICS_ENABLED
  • POWERFLEX_STORAGE_POOL_METRICS_ENABLED
  • POWERFLEX_STORAGE_POOL_POLL_FREQUENCY
  • POWERFLEX_MAX_CONCURRENT_QUERIES
  • LOG_LEVEL
  • LOG_FORMAT
| +| karavi-metrics-powerstore-configmap | karavi-metrics-powerstore |
  • COLLECTOR_ADDR
  • PROVISIONER_NAMES
  • POWERSTORE_VOLUME_METRICS_ENABLED
  • POWERSTORE_VOLUME_IO_POLL_FREQUENCY
  • POWERSTORE_SPACE_POLL_FREQUENCY
  • POWERSTORE_ARRAY_POLL_FREQUENCY
  • POWERSTORE_FILE_SYSTEM_POLL_FREQUENCY
  • POWERSTORE_MAX_CONCURRENT_QUERIES
  • LOG_LEVEL
  • LOG_FORMAT
  • ZIPKIN_URI
  • ZIPKIN_SERVICE_NAME
  • ZIPKIN_PROBABILITY
| +| karavi-metrics-powerscale-configmap | karavi-metrics-powerscale |
  • COLLECTOR_ADDR
  • PROVISIONER_NAMES
  • POWERSCALE_MAX_CONCURRENT_QUERIES
  • POWERSCALE_CAPACITY_METRICS_ENABLED
  • POWERSCALE_PERFORMANCE_METRICS_ENABLED
  • POWERSCALE_CLUSTER_CAPACITY_POLL_FREQUENCY
  • POWERSCALE_CLUSTER_PERFORMANCE_POLL_FREQUENCY
  • POWERSCALE_QUOTA_CAPACITY_POLL_FREQUENCY
  • POWERSCALE_ISICLIENT_INSECURE
  • POWERSCALE_ISICLIENT_AUTH_TYPE
  • POWERSCALE_ISICLIENT_VERBOSE
  • LOG_LEVEL
  • LOG_FORMAT
| +| karavi-topology-configmap | karavi-topology |
  • PROVISIONER_NAMES
  • LOG_LEVEL
  • LOG_FORMAT
  • ZIPKIN_URI
  • ZIPKIN_SERVICE_NAME
  • ZIPKIN_PROBABILITY
| To update any of these settings, run the following command on the Kubernetes cluster, then save the updated ConfigMap data. @@ -387,29 +393,57 @@ In this case, all storage system requests made by CSM for Observability will be #### Update the Authorization Module Token +##### CSI Driver for Dell PowerFlex + 1. Delete the current `proxy-authz-tokens` Secret from the CSM namespace. ```console $ kubectl delete secret proxy-authz-tokens -n [CSM_NAMESPACE] ``` -2. Copy the `proxy-authz-tokens` Secret from a CSI Driver to the CSM namespace. +2. Copy the `proxy-authz-tokens` Secret from the CSI Driver for Dell PowerFlex to the CSM namespace. ```console $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` +##### CSI Driver for Dell PowerScale + +1. Delete the current `isilon-proxy-authz-tokens` Secret from the CSM namespace. + ```console + $ kubectl delete secret isilon-proxy-authz-tokens -n [CSM_NAMESPACE] + ``` + +2. Copy the `isilon-proxy-authz-tokens` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace. + ```console + $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f - + ``` + #### Update Storage Systems If the list of storage systems managed by a Dell CSI Driver has changed, the following steps can be performed to update CSM for Observability to reference the updated systems: +##### CSI Driver for Dell PowerFlex + 1. Delete the current `karavi-authorization-config` Secret from the CSM namespace. ```console $ kubectl delete secret karavi-authorization-config -n [CSM_NAMESPACE] ``` -2. Copy the `karavi-authorization-config` Secret from the CSI Driver namespace to CSM for Observability namespace. +2. Copy the `karavi-authorization-config` Secret from the CSI Driver for Dell PowerFlex namespace to the CSM for Observability namespace. ```console $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` +##### CSI Driver for Dell PowerScale + +1. Delete the current `isilon-karavi-authorization-config` Secret from the CSM namespace. + ```console + $ kubectl delete secret isilon-karavi-authorization-config -n [CSM_NAMESPACE] + ``` + +2. Copy the `isilon-karavi-authorization-config` Secret from the CSI Driver for Dell PowerScale namespace to the CSM for Observability namespace. + ```console + $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | kubectl create -f - + ``` + ### When CSM for Observability does not use the Authorization module In this case all storage system requests made by CSM for Observability will not be routed through the Authorization module. The following must be performed: @@ -437,3 +471,15 @@ In this case all storage system requests made by CSM for Observability will not ```console $ kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` + +### CSI Driver for Dell PowerScale + +1. Delete the current `isilon-creds` Secret from the CSM namespace. 
+ ```console + $ kubectl delete secret isilon-creds -n [CSM_NAMESPACE] + ``` + +2. Copy the `isilon-creds` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace. + ```console + $ kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - + ``` \ No newline at end of file diff --git a/content/docs/observability/deployment/helm.md b/content/docs/observability/deployment/helm.md index 02feb6186f..6433b60836 100644 --- a/content/docs/observability/deployment/helm.md +++ b/content/docs/observability/deployment/helm.md @@ -22,7 +22,8 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O 3. Add the Dell Helm Charts repo `helm repo add dell https://dell.github.io/helm-charts` 4. Copy only the deployed CSI driver entities to the Observability namespace - #### PowerFlex + + ### PowerFlex 1. Copy the config Secret from the CSI PowerFlex namespace into the CSM for Observability namespace: @@ -38,12 +39,30 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` - #### PowerStore + ### PowerStore 1. Copy the config Secret from the CSI PowerStore namespace into the CSM for Observability namespace: `kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` + ### PowerScale + + 1. Copy the config Secret from the CSI PowerScale namespace into the CSM for Observability namespace: + + `kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` + + If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps: + + 2. Copy the driver configuration parameters ConfigMap from the CSI PowerScale namespace into the CSM for Observability namespace: + + `kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` + + 3. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerScale namespace into the CSM for Observability namespace: + + `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -` + + + 5. Configure the [parameters](#configuration) and install the CSM for Observability Helm Chart A default values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml) that can be used for installation. This can be copied into a file named `myvalues.yaml` and either used as is or modified accordingly. 
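As a sketch of that configuration step, assuming the raw-file path mirrors the chart location linked above (the exact commands are an illustration, not part of the chart's documented workflow):

```console
$ curl -sL https://raw.githubusercontent.com/dell/helm-charts/main/charts/karavi-observability/values.yaml -o myvalues.yaml
$ vi myvalues.yaml   # enable and configure the metrics services for the drivers deployed in your cluster
```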
@@ -51,6 +70,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
   __Note:__
   - The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
   - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in your values file for CSM Observability.
+  - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in your values file for CSM Observability.
 
   ```console
   $ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] -f myvalues.yaml
@@ -106,7 +126,7 @@ The following table lists the configurable parameters of the CSM for Observabili
 | `karaviMetricsPowerstore.collectorAddr` | Metrics Collector accessible from the Kubernetes cluster | `otel-collector:55680` |
 | `karaviMetricsPowerstore.provisionerNames` | Provisioner Names used to filter for determining PowerStore volumes (must be a Comma-separated list) | `csi-powerstore.dellemc.com` |
 | `karaviMetricsPowerstore.volumePollFrequencySeconds` | The polling frequency (in seconds) to gather volume metrics | `10` |
-| `karaviMetricsPowerstore.concurrentPowerflexQueries` | The number of simultaneous metrics queries to make to PowerStore (must be less than 10; otherwise, several request errors from PowerStore will ensue.) | `10` |
+| `karaviMetricsPowerstore.concurrentPowerstoreQueries` | The number of simultaneous metrics queries to make to PowerStore (must be less than 10; otherwise, several request errors from PowerStore will ensue.) | `10` |
 | `karaviMetricsPowerstore.volumeMetricsEnabled` | Enable PowerStore Volume Metrics Collection | `true` |
 | `karaviMetricsPowerstore.endpoint` | Endpoint for pod leader election | `karavi-metrics-powerstore` |
 | `karaviMetricsPowerstore.service.type` | Kubernetes service type | `ClusterIP` |
@@ -115,3 +135,23 @@ The following table lists the configurable parameters of the CSM for Observabili
 | `karaviMetricsPowerstore.zipkin.uri` | URI of a Zipkin instance where tracing data can be forwarded | |
 | `karaviMetricsPowerstore.zipkin.serviceName` | Service name used for Zipkin tracing data | `metrics-powerstore`|
 | `karaviMetricsPowerstore.zipkin.probability` | Percentage of trace information to send to Zipkin (Valid range: 0.0 to 1.0) | `0` |
+| `karaviMetricsPowerscale.image` | CSM Metrics for PowerScale Service image | `dellemc/csm-metrics-powerscale:v1.0`|
+| `karaviMetricsPowerscale.enabled` | Enable CSM Metrics for PowerScale service | `true` |
+| `karaviMetricsPowerscale.collectorAddr` | Metrics Collector accessible from the Kubernetes cluster | `otel-collector:55680` |
+| `karaviMetricsPowerscale.provisionerNames` | Provisioner Names used to filter for determining PowerScale volumes (must be a Comma-separated list) | `csi-isilon.dellemc.com` |
+| `karaviMetricsPowerscale.capacityMetricsEnabled` | Enable PowerScale capacity metric Collection | `true` |
+| `karaviMetricsPowerscale.performanceMetricsEnabled` | Enable PowerScale performance metric Collection | `true` |
+| `karaviMetricsPowerscale.clusterCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather cluster capacity metrics | `30` |
+| `karaviMetricsPowerscale.clusterPerformancePollFrequencySeconds` | The polling frequency (in seconds) to gather cluster performance metrics | `20` |
+| `karaviMetricsPowerscale.quotaCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather volume capacity metrics | `30` |
+| `karaviMetricsPowerscale.concurrentPowerscaleQueries` | The number of simultaneous metrics queries to make to PowerScale (must be less than 10; otherwise, several request errors from PowerScale will ensue.) | `10` |
+| `karaviMetricsPowerscale.endpoint` | Endpoint for pod leader election | `karavi-metrics-powerscale` |
+| `karaviMetricsPowerscale.service.type` | Kubernetes service type | `ClusterIP` |
+| `karaviMetricsPowerscale.logLevel` | Output logs that are at or above the given log level severity (Valid values: TRACE, DEBUG, INFO, WARN, ERROR, FATAL, PANIC) | `INFO`|
+| `karaviMetricsPowerscale.logFormat` | Output logs in the specified format (Valid values: text, json) | `text` |
+| `karaviMetricsPowerscale.isiClientOptions.isiSkipCertificateValidation` | Skip certificate validation of the OneFS API server | `true` |
+| `karaviMetricsPowerscale.isiClientOptions.isiAuthType` | 0 to enable session-based authentication; 1 to enable basic authentication | `1` |
+| `karaviMetricsPowerscale.isiClientOptions.isiLogVerbose` | Set the verbosity (high/medium/low) of OneFS REST API messages | `0` |
+| `karaviMetricsPowerscale.authorization.enabled` | [Authorization](../../../authorization) is an optional feature to apply credential shielding of the backend PowerScale. | `false` |
+| `karaviMetricsPowerscale.authorization.proxyHost` | Hostname of the csm-authorization server. | |
+| `karaviMetricsPowerscale.authorization.skipCertificateValidation` | A boolean that enables/disables certificate validation of the csm-authorization server. | |
diff --git a/content/docs/observability/deployment/offline.md b/content/docs/observability/deployment/offline.md
index b4c5ccd9d6..16b93f1bac 100644
--- a/content/docs/observability/deployment/offline.md
+++ b/content/docs/observability/deployment/offline.md
@@ -24,9 +24,9 @@ If one Linux system has both internet access and access to an internal registry,
 
 Preparing an offline bundle requires the following utilities:
 
-| Dependency            | Usage |
-| --------------------- | ----- |
-| `docker`              | `docker` will be used to pull images from public image registries, tag them, and push them to a private registry.<br>Required on both the system building the offline bundle as well as the system preparing for installation.<br>Tested version is `docker` 18.09
+| Dependency | Usage |
+|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `docker`   | `docker` will be used to pull images from public image registries, tag them, and push them to a private registry.<br>Required on both the system building the offline bundle as well as the system preparing for installation.<br>Tested version is `docker` 18.09+ |
 
### Executing the Installer
 
@@ -72,10 +72,12 @@ To perform an offline installation of a Helm chart, the following steps should b
   *
   * Downloading and saving Docker images
 
-     dellemc/csm-topology:v0.3.0
-     dellemc/csm-metrics-powerflex:v0.3.0
-     otel/opentelemetry-collector:0.9.0
-     nginxinc/nginx-unprivileged:1.18
+     dellemc/csm-topology:v1.3.0
+     dellemc/csm-metrics-powerflex:v1.3.0
+     dellemc/csm-metrics-powerstore:v1.3.0
+     dellemc/csm-metrics-powerscale:v1.3.0
+     otel/opentelemetry-collector:0.42.0
+     nginxinc/nginx-unprivileged:1.20
 
   *
   * Compressing offline-karavi-observability-bundle.tar.gz
 
@@ -103,10 +105,12 @@ To perform an offline installation of a Helm chart, the following steps should b
   *
   * Loading, tagging, and pushing Docker images to registry :5000/
 
-     dellemc/csm-topology:v0.3.0 -> :5000/csm-topology:v0.3.0
-     dellemc/csm-metrics-powerflex:v0.3.0 -> :5000/csm-metrics-powerflex:v0.3.0
-     otel/opentelemetry-collector:0.9.0 -> :5000/opentelemetry-collector:0.9.0
-     nginxinc/nginx-unprivileged:1.18 -> :5000/nginx-unprivileged:1.18
+     dellemc/csm-topology:v1.3.0 -> :5000/csm-topology:v1.3.0
+     dellemc/csm-metrics-powerflex:v1.3.0 -> :5000/csm-metrics-powerflex:v1.3.0
+     dellemc/csm-metrics-powerstore:v1.3.0 -> :5000/csm-metrics-powerstore:v1.3.0
+     dellemc/csm-metrics-powerscale:v1.3.0 -> :5000/csm-metrics-powerscale:v1.3.0
+     otel/opentelemetry-collector:0.42.0 -> :5000/opentelemetry-collector:0.42.0
+     nginxinc/nginx-unprivileged:1.20 -> :5000/nginx-unprivileged:1.20
   ```
 
### Perform Helm installation
 
@@ -145,12 +149,28 @@ To perform an offline installation of a Helm chart, the following steps should b
     [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
     ```
 
-4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
+    CSI Driver for PowerScale:
+    ```
+    [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+    ```
+
+    If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps:
+
+    ```
+    [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+    ```
+
+    ```
+    [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
+    ```
+
+4. Now that the required images have been made available and the Helm chart's configuration has been updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
 
   **Note:**
   - Optionally, you could provide your own [configurations](../helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
   - The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
   - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured.
+  - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured.
 
   ```
   [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm install -n install-namespace app-name karavi-observability
diff --git a/content/docs/observability/deployment/online.md b/content/docs/observability/deployment/online.md
index 60e83ef3a9..82524a658c 100644
--- a/content/docs/observability/deployment/online.md
+++ b/content/docs/observability/deployment/online.md
@@ -69,6 +69,8 @@ Options:
   --namespace[=]                Namespace where Karavi Observability will be installed
 Optional
   --csi-powerflex-namespace[=]  Namespace where CSI PowerFlex is installed, default is 'vxflexos'
+  --csi-powerstore-namespace[=] Namespace where CSI PowerStore is installed, default is 'csi-powerstore'
+  --csi-powerscale-namespace[=] Namespace where CSI PowerScale is installed, default is 'isilon'
   --set-file                    Set values from files used during helm installation (can be specified multiple times)
   --skip-verify                 Skip verification of the environment
   --values[=]                   Values file, which defines configuration values
@@ -77,7 +79,7 @@ Options:
   --help                        Help
 ```
 
-__Note:__ CSM for Authorization currently does not support the Observability module for PowerStore. Therefore setting `enable-authorization` is not supported in this case. 
+__Note:__ CSM for Authorization currently does not support the Observability module for PowerStore. Therefore, setting `enable-authorization` is not supported in this case.
 
### Executing the Installer
 
@@ -101,6 +103,7 @@ To perform an online installation of CSM for Observability, the following steps
   __Note:__
   - The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
   - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
+  - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
 
   ```
   [user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh install --namespace [CSM_NAMESPACE] --values myvalues.yaml
diff --git a/content/docs/observability/design/_index.md b/content/docs/observability/design/_index.md
index e6dc2b93c1..adb56abcb8 100644
--- a/content/docs/observability/design/_index.md
+++ b/content/docs/observability/design/_index.md
@@ -19,7 +19,10 @@ The following prerequisites must be deployed into the namespace where CSM for Ob
 - Prometheus for scraping the metrics from the OTEL collector.
 - Grafana for visualizing the metrics from Prometheus and Topology services using custom dashboards.
-- CSM for Observability will use secrets to get details about the storage systems used by the CSI drivers. These secrets should be copied from the namespaces where the drivers are deployed. CSI Powerflex driver uses the 'vxflexos-config' secret and CSI PowerStore uses the 'powerstore-config' secret.
+- CSM for Observability will use secrets to get details about the storage systems used by the CSI drivers. These secrets should be copied from the namespaces where the drivers are deployed.
+  - CSI PowerFlex driver uses the 'vxflexos-config' secret.
+  - CSI PowerStore driver uses the 'powerstore-config' secret.
+  - CSI PowerScale driver uses the 'isilon-creds' secret.
 
 ## Deployment Architectures
diff --git a/content/docs/observability/metrics/powerscale.md b/content/docs/observability/metrics/powerscale.md
new file mode 100644
index 0000000000..d06d168902
--- /dev/null
+++ b/content/docs/observability/metrics/powerscale.md
@@ -0,0 +1,45 @@
+---
+title: PowerScale Metrics
+linktitle: PowerScale Metrics
+weight: 1
+description: >
+  Dell Container Storage Modules (CSM) for Observability PowerScale Metrics
+---
+
+This section outlines the metrics collected by the Container Storage Modules (CSM) Observability module for PowerScale. The [Grafana reference dashboards](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale) for PowerScale metrics can be uploaded to your Grafana instance.
+
+## I/O Performance Metrics
+
+Storage system I/O performance metrics (IOPS, bandwidth) are available by default and broken down by cluster and quota.
+
+To disable these metrics, set the `performanceMetricsEnabled` field to `false` in helm/values.yaml.
+
+The following I/O performance metrics are available from the OpenTelemetry collector endpoint. Please see [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|--------------------------------------------------------------------|------------------------------------------------------------------------------------------|
+| powerscale_cluster_cpu_use_rate | Average CPU usage for all nodes in the monitored cluster |
+| powerscale_cluster_disk_read_operation_rate | Average rate at which the disks in the cluster are servicing data read change requests |
+| powerscale_cluster_disk_write_operation_rate | Average rate at which the disks in the cluster are servicing data write change requests |
+| powerscale_cluster_disk_throughput_read_rate_megabytes_per_second | Throughput rate of data being read from the disks in the cluster |
+| powerscale_cluster_disk_throughput_write_rate_megabytes_per_second | Throughput rate of data being written to the disks in the cluster |
+
+## Storage Capacity Metrics
+
+Provides visibility into the total, used, and available capacity for the PowerScale cluster and quotas.
+
+To disable these metrics, set the `capacityMetricsEnabled` field to `false` in helm/values.yaml.
+
+The following storage capacity metrics are available from the OpenTelemetry collector endpoint. Please see [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|---------------------------------------------------|---------------------------------------------------------------------|
+| powerscale_cluster_total_capacity_terabytes | Total cluster capacity (TB) |
+| powerscale_cluster_remaining_capacity_terabytes | Total unused cluster capacity (TB) |
+| powerscale_cluster_used_capacity_percentage | Percent of total cluster capacity that has been used |
+| powerscale_cluster_total_hard_quota_gigabytes | Amount of total capacity allocated in all directory hard quotas |
+| powerscale_cluster_total_hard_quota_percentage | Percent of total capacity allocated in all directory hard quotas |
+| powerscale_volume_quota_subscribed_gigabytes | Space used under the quota for a directory (GB) |
+| powerscale_volume_hard_quota_remaining_gigabytes | Unused space below the hard limit for a directory (GB) |
+| powerscale_volume_quota_subscribed_percentage | Percentage of space used against the hard limit for a directory |
+| powerscale_volume_hard_quota_remaining_percentage | Percentage of space remaining below the hard limit for a directory |
diff --git a/content/docs/observability/release/_index.md b/content/docs/observability/release/_index.md
index 84a9c87ea2..07f248dc73 100644
--- a/content/docs/observability/release/_index.md
+++ b/content/docs/observability/release/_index.md
@@ -6,14 +6,15 @@ Description: >
   Dell Container Storage Modules (CSM) release notes for observability
 ---
 
-## Release Notes - CSM Observability 1.2.0
+## Release Notes - CSM Observability 1.3.0
 
 ### New Features/Changes
 
+- [Support PowerScale in CSM Observability](https://github.com/dell/csm/issues/452)
+- [Set PV/PVC's namespace when using Observability Module](https://github.com/dell/csm/issues/453)
+- [CSM Observability modules stick with otel controller 0.42.0](https://github.com/dell/csm/issues/454)
 
 ### Fixed Issues
-- [PowerStore Grafana dashboard does not populate correctly ](https://github.com/dell/csm/issues/279)
-- [Grafana installation script - prometheus address is incorrect](https://github.com/dell/csm/issues/278)
-- [prometheus-values.yaml error in json](https://github.com/dell/csm/issues/259)
+- [Observability Topology: nil pointer error](https://github.com/dell/csm/issues/430)
 
 ### Known Issues
\ No newline at end of file
diff --git a/content/docs/observability/troubleshooting/_index.md b/content/docs/observability/troubleshooting/_index.md
index 4c094c212d..7a5fbac6d7 100644
--- a/content/docs/observability/troubleshooting/_index.md
+++ b/content/docs/observability/troubleshooting/_index.md
@@ -171,7 +171,7 @@ sidecar:
   enabled: true
 
 ## Additional grafana server ConfigMap mounts
-## Defines additional mounts with ConfigMap. CofigMap must be manually created in the namespace.
+## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace.
 extraConfigmapMounts: []
 ```
diff --git a/content/docs/observability/upgrade/_index.md b/content/docs/observability/upgrade/_index.md
index a44d38c615..932c107e02 100644
--- a/content/docs/observability/upgrade/_index.md
+++ b/content/docs/observability/upgrade/_index.md
@@ -55,7 +55,7 @@ CSM for Observability online installer upgrade can be used if the initial deploy
    ```
 2. Update `values.yaml` file as needed. Configuration options are outlined in the [Helm chart deployment section](../deployment/helm#configuration).
-2. Execute the `./karavi-observability-install.sh` script:
+3. Execute the `./karavi-observability-install.sh` script:
    ```
    [user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh upgrade --namespace $namespace --values myvalues.yaml --version $latest_chart_version
    ---------------------------------------------------------------------------------
@@ -80,3 +80,42 @@ CSM for Observability online installer upgrade can be used if the initial deploy
    |
    |- Waiting for pods in namespace karavi to be ready
                                                                 Success
    ```
+
+## Offline Installer Upgrade
+
+These instructions assume that you have already installed the Karavi Observability Helm chart with the offline installer and that you meet its installation requirements.
+Follow them when a Helm chart was installed, and is to be upgraded, in an environment that does not have an internet connection and cannot download the Helm chart and related Docker images.
+
+1. Build the Offline Bundle
+   Follow [Offline Karavi Observability Helm Chart Installer](../deployment/offline) to build the latest bundle.
+
+2. Unpack the Offline Bundle
+   Follow [Offline Karavi Observability Helm Chart Installer](../deployment/offline), copy and unpack the Offline Bundle to another Linux system, and push the Docker images to the internal Docker registry.
+
+3. Perform Helm upgrade
+   1. Change directory to `helm`, which contains the updated Helm chart directory:
+      ```
+      [user@anothersystem /home/user/offline-karavi-observability-bundle]# cd helm
+      ```
+   2. Install the necessary cert-manager CustomResourceDefinitions provided:
+      ```
+      [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl apply --validate=false -f cert-manager.crds.yaml
+      ```
+   3. (Optional) Enable Karavi Observability for PowerFlex/PowerScale to use an existing instance of Karavi Authorization for accessing the REST API of the given storage systems.
+      **Note**: This assumes that, if Authorization was enabled for Karavi Observability during the [Offline Karavi Observability Helm Chart Installer](../deployment/offline) phase, the Authorization Secrets/ConfigMap have already been copied to the Karavi Observability namespace.
+      A sample configuration values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
+      In your own configuration values.yaml, you need to enable PowerFlex/PowerScale Authorization and provide the location of the sidecar-proxy Docker image and the URL of the Karavi Authorization proxyHost address.
+
+   4. Now that the required images have been made available and the Helm chart's configuration has been updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
+      **Note**: This assumes that your Secrets from the CSI Drivers were copied to the Karavi Observability namespace during the [Offline Karavi Observability Helm Chart Installer](../deployment/offline) phase.
+      Optionally, you could provide your own [configurations](../deployment/helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
+      ```
+      [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm upgrade -n install-namespace app-name karavi-observability
+      NAME: app-name
+      LAST DEPLOYED: Wed Aug 17 14:44:04 2022
+      NAMESPACE: install-namespace
+      STATUS: deployed
+      REVISION: 1
+      TEST SUITE: None
+      ```
+ 
\ No newline at end of file
diff --git a/content/docs/references/_index.md b/content/docs/references/_index.md
index 28cae60329..ce3be78438 100644
--- a/content/docs/references/_index.md
+++ b/content/docs/references/_index.md
@@ -1,7 +1,7 @@
 ---
 title: "References"
 linkTitle: "References"
-weight: 13
+weight: 14
 Description: >
   Dell Technologies (Dell) Container Storage Modules (CSM) References
 ---
diff --git a/content/docs/references/cli/_index.md b/content/docs/references/cli/_index.md
new file mode 100644
index 0000000000..e99a6775da
--- /dev/null
+++ b/content/docs/references/cli/_index.md
@@ -0,0 +1,534 @@
+---
+title: "CLI"
+linkTitle: "CLI"
+weight: 1
+Description: >
+  CLI for Dell Container Storage Modules (CSM)
+---
+dellctl is a common command line interface (CLI) used to interact with and manage your [Container Storage Modules](https://github.com/dell/csm) (CSM) resources.
+This document outlines all dellctl commands, their intended use, options that can be provided to alter their execution, and expected output from those commands.
+
+| Command | Description |
+| - | - |
+| [dellctl](#dellctl) | dellctl is used to interact with Container Storage Modules |
+| [dellctl cluster](#dellctl-cluster) | Manipulate one or more k8s cluster configurations |
+| [dellctl cluster add](#dellctl-cluster-add) | Add a k8s cluster to be managed by dellctl |
+| [dellctl cluster remove](#dellctl-cluster-remove) | Remove a k8s cluster managed by dellctl |
+| [dellctl cluster get](#dellctl-cluster-get) | List all clusters currently being managed by dellctl |
+| [dellctl backup](#dellctl-backup) | Manipulate application backups/clones |
+| [dellctl backup create](#dellctl-backup-create) | Create application backups/clones |
+| [dellctl backup delete](#dellctl-backup-delete) | Delete application backups |
+| [dellctl backup get](#dellctl-backup-get) | Get application backups |
+| [dellctl restore](#dellctl-restore) | Manipulate application restores |
+| [dellctl restore create](#dellctl-restore-create) | Restore an application backup |
+| [dellctl restore delete](#dellctl-restore-delete) | Delete application restores |
+| [dellctl restore get](#dellctl-restore-get) | Get application restores |
+
+
+## Installation instructions
+1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.4.0).
+2. Make the binary executable: `chmod +x dellctl`.
+3. Move `dellctl` to `/usr/local/bin`, or add its containing directory to the PATH environment variable.
+4. Run `dellctl --help` to list the available commands, or run `dellctl <command> --help` to learn more about a specific command.
+
+By default, `dellctl` runs against the local cluster (referenced by the `KUBECONFIG` environment variable or by a kube config file present at the default location).
+You can register one or more remote clusters with `dellctl` and run any `dellctl` command against those clusters by specifying the registered cluster ID to the command.
+
+
+## General Commands
+
+### dellctl
+
+dellctl is a CLI tool for managing Dell Container Storage Resources.
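As a quick sanity check after following the installation steps above, you can confirm the binary is reachable from your `PATH`; the exact version output shown below is illustrative:

```
# dellctl --version
dellctl version v1.4.0
```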
+
+##### Flags
+
+```
+  -h, --help      help for dellctl
+  -v, --version   version for dellctl
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl cluster
+
+Manipulate one or more k8s cluster configurations
+
+##### Available Commands
+
+```
+  add         Adds a k8s cluster to be managed by dellctl
+  remove      Removes a k8s cluster managed by dellctl
+  get         List all clusters currently being managed by dellctl
+```
+
+##### Flags
+
+```
+  -h, --help   help for cluster
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl cluster add
+
+Add one or more k8s clusters to be managed by dellctl
+
+##### Flags
+
+```
+Flags:
+  -n, --names strings   cluster names
+  -f, --files strings   paths for kube config files
+  -u, --uids strings    uids of the kube-system namespaces in the clusters
+      --force           forcefully add cluster
+  -h, --help            help for add
+```
+
+##### Output
+
+```
+# dellctl cluster add -n cluster1 -f ~/kubeconfigs/cluster1-kubeconfig
+ INFO Adding clusters ...
+ INFO Cluster: cluster1
+ INFO Successfully added cluster cluster1 in /root/.dellctl/clusters/cluster1 folder.
+```
+
+Add a cluster with its UID
+
+```
+# dellctl cluster add -n cluster2 -f ~/kubeconfigs/cluster2-kubeconfig -u "035133aa-5b65-4080-a813-34a7abe48180"
+ INFO Adding clusters ...
+ INFO Cluster: cluster2
+ INFO Successfully added cluster cluster2 in /root/.dellctl/clusters/cluster2 folder.
+```
+
+
+
+---
+
+
+
+### dellctl cluster remove
+
+Remove a k8s cluster by name from the list of clusters being managed by dellctl
+
+##### Aliases
+
+```
+  remove, rm
+```
+
+##### Flags
+
+```
+  -h, --help          help for remove
+  -n, --name string   cluster name
+```
+
+##### Output
+
+```
+# dellctl cluster remove -n cluster1
+ INFO Removing cluster with id cluster1
+ INFO Removed cluster with id cluster1
+```
+
+
+
+---
+
+
+
+### dellctl cluster get
+
+List all clusters currently being managed by dellctl
+
+##### Aliases
+
+```
+  get, ls
+```
+
+##### Flags
+
+```
+  -h, --help   help for get
+```
+
+##### Output
+
+```
+# dellctl cluster get
+CLUSTER ID   VERSION   URL                    UID
+cluster1     v1.22     https://1.2.3.4:6443
+cluster2     v1.22     https://1.2.3.5:6443   035133aa-5b65-4080-a813-34a7abe48180
+```
+
+
+
+---
+
+
+
+## Commands related to application mobility operations
+
+### dellctl backup
+
+Manipulate application backups/clones
+
+##### Available Commands
+
+```
+  create      Create an application backup/clones
+  delete      Delete application backups
+  get         Get application backups
+```
+
+##### Flags
+
+```
+  -h, --help   help for backup
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl backup create
+
+Create application backups/clones
+
+##### Flags
+
+```
+      --cluster-id string                Id of the cluster managed by dellctl
+      --exclude-namespaces stringArray   List of namespace names to exclude from the backup.
+      --include-namespaces stringArray   List of namespace names to include in the backup (use '*' for all namespaces). (default *)
+      --ttl duration                     Backup retention period. (default 720h0m0s)
+      --exclude-resources stringArray    Resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io.
+      --include-resources stringArray    Resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).
+      --backup-location string           Storage location where k8s resources and application data will be backed up to. (default "default")
+      --data-mover string                Data mover to be used to backup application data. (default "Restic")
+      --include-cluster-resources optionalBool[=true]   Include cluster-scoped resources in the backup
+  -l, --label-selector labelSelector     Only backup resources matching this label selector. (default )
+  -n, --namespace string                 The namespace in which application mobility service should operate. (default "app-mobility-system")
+      --clones stringArray               Creates an application clone into target clusters managed by dellctl. Specify optional namespace mappings where the clone is created. Example: 'cluster1/sourceNamespace1:targetNamespace1', 'cluster1/sourceNamespace1:targetNamespace1;cluster2/sourceNamespace2:targetNamespace2'
+  -h, --help                             help for create
+```
+
+##### Output
+
+Create a backup of the applications running in namespace `demo1`
+
+```
+# dellctl backup create backup1 --include-namespaces demo1
+ INFO Backup request "backup1" submitted successfully.
+ INFO Run 'dellctl backup get backup1' for more details.
+```
+
+Create clones of the application running in namespace `demo1` on the clusters with IDs `cluster1` and `cluster2`
+
+```
+# dellctl backup create demo-app-clones --include-namespaces demo1 --clones "cluster1/demo1:restore-ns1" --clones "cluster2/demo1:restore-ns1"
+ INFO Clone request "demo-app-clones" submitted successfully.
+ INFO Run 'dellctl backup get demo-app-clones' for more details.
+```
+
+Take a backup of the application running in namespace `demo3` on the remote cluster with ID `cluster2`
+
+```
+# dellctl backup create backup4 --include-namespaces demo3 --cluster-id cluster2
+ INFO Backup request "backup4" submitted successfully.
+ INFO Run 'dellctl backup get backup4' for more details.
+```
+
+
+
+---
+
+
+
+### dellctl backup delete
+
+Delete one or more application backups
+
+##### Flags
+
+```
+      --all                 Delete all backups
+      --cluster-id string   Id of the cluster managed by dellctl
+      --confirm             Confirm deletion
+  -h, --help                help for delete
+  -n, --namespace string    The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+```
+# dellctl backup delete backup1
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete backup "backup1" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+Delete multiple backups
+
+```
+# dellctl backup delete backup1 backup2
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete backup "backup1" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+ INFO Request to delete backup "backup2" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+
+Delete all backups without asking for user confirmation
+
+```
+# dellctl backup delete --all --confirm
+ INFO Request to delete backup "backup4" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+ INFO Request to delete backup "demo-app-clones" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+
+---
+
+
+
+### dellctl backup get
+
+Get application backups
+
+##### Flags
+
+```
+      --cluster-id string   Id of the cluster managed by dellctl
+  -h, --help                help for get
+  -n, --namespace string    The namespace in which application mobility service should operate. (default "app-mobility-system")
+
+```
+
+##### Output
+
+```
+# dellctl backup get
+NAME              STATUS      CREATED                         EXPIRES                         STORAGE LOCATION   DATA MOVER   CLONED   TARGET CLUSTERS
+backup1           Completed   2022-07-27 11:51:00 -0400 EDT   2022-08-26 11:51:00 -0400 EDT   default            Restic       false
+backup2           Completed   2022-07-27 11:59:24 -0400 EDT   2022-08-26 11:59:42 -0400 EDT   default            Restic       false
+backup4           Completed   2022-07-27 12:02:54 -0400 EDT   NA                              default            Restic       false
+demo-app-clones   Restored    2022-07-27 11:53:37 -0400 EDT   2022-08-26 11:53:37 -0400 EDT   default            Restic       true     cluster1, cluster2
+```
+
+Get backups from the remote cluster with ID `cluster2`
+
+```
+# dellctl backup get --cluster-id cluster2
+NAME              STATUS      CREATED                         EXPIRES                         STORAGE LOCATION   DATA MOVER   CLONED   TARGET CLUSTERS
+backup1           Completed   2022-07-27 11:52:42 -0400 EDT   NA                              default            Restic       false
+backup2           Completed   2022-07-27 12:02:29 -0400 EDT   NA                              default            Restic       false
+backup4           Completed   2022-07-27 12:01:49 -0400 EDT   2022-08-26 12:01:49 -0400 EDT   default            Restic       false
+demo-app-clones   Completed   2022-07-27 11:54:55 -0400 EDT   NA                              default            Restic       true     cluster1, cluster2
+```
+
+Get specific backups by name
+
+```
+# dellctl backup get backup1 demo-app-clones
+NAME              STATUS      CREATED                         EXPIRES                         STORAGE LOCATION   DATA MOVER   CLONED   TARGET CLUSTERS
+backup1           Completed   2022-07-27 11:51:00 -0400 EDT   2022-08-26 11:51:00 -0400 EDT   default            Restic       false
+demo-app-clones   Completed   2022-07-27 11:53:37 -0400 EDT   2022-08-26 11:53:37 -0400 EDT   default            Restic       true     cluster1, cluster2
+```
+
+
+
+---
+
+
+
+### dellctl restore
+
+Manipulate application restores
+
+##### Available Commands
+
+```
+  create      Restore an application backup
+  delete      Delete application restores
+  get         Get application restores
+```
+
+##### Flags
+
+```
+  -h, --help   help for restore
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl restore create
+
+Restore an application backup
+
+##### Flags
+
+```
+      --cluster-id string                     Id of the cluster managed by dellctl
+      --from-backup string                    Backup to restore from
+      --namespace-mappings mapStringString    Map of source namespace names to target namespace names to restore into in the form src1:dst1,src2:dst2,...
+      --exclude-namespaces stringArray        List of namespace names to exclude from the backup.
+      --include-namespaces stringArray        List of namespace names to include in the backup (use '*' for all namespaces). (default *)
+      --exclude-resources stringArray         Resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io.
+      --include-resources stringArray         Resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).
+      --restore-volumes optionalBool[=true]   Whether to restore volumes from snapshots.
+      --include-cluster-resources optionalBool[=true]   Include cluster-scoped resources in the backup
+  -n, --namespace string                      The namespace in which application mobility service should operate. (default "app-mobility-system")
+  -h, --help                                  help for create
+```
+
+##### Output
+
+Restore application backup `backup1` on the local cluster in namespace `restorens1`
+
+```
+# dellctl restore create restore1 --from-backup backup1 --namespace-mappings "demo1:restorens1"
+ INFO Restore request "restore1" submitted successfully.
+ INFO Run 'dellctl restore get restore1' for more details.
+```
+
+Restore application backup `backup1` on the remote cluster `cluster2` in namespace `demo1`
+
+```
+# dellctl restore create restore1 --from-backup backup1 --cluster-id cluster2
+ INFO Restore request "restore1" submitted successfully.
+ INFO Run 'dellctl restore get restore1' for more details.
+```
+
+
+
+---
+
+
+
+### dellctl restore delete
+
+Delete one or more application restores
+
+##### Flags
+
+```
+      --all                 Delete all restores
+      --cluster-id string   Id of the cluster managed by dellctl
+      --confirm             Confirm deletion
+  -h, --help                help for delete
+  -n, --namespace string    The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Delete a restore created on the remote cluster with ID `cluster2`
+
+```
+# dellctl restore delete restore1 --cluster-id cluster2
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+Delete multiple restores
+
+```
+# dellctl restore delete restore1 restore4
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+ INFO Request to delete restore "restore4" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+Delete all restores without asking for user confirmation
+
+```
+# dellctl restore delete --all --confirm
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+ INFO Request to delete restore "restore2" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+
+---
+
+
+
+### dellctl restore get
+
+Get application restores
+
+##### Flags
+
+```
+      --cluster-id string   Id of the cluster managed by dellctl
+  -h, --help                help for get
+  -n, --namespace string    The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Get all the application restores created on the local cluster
+
+```
+# dellctl restore get
+NAME       BACKUP    STATUS      CREATED                         COMPLETED
+restore1   backup1   Completed   2022-07-27 12:35:29 -0400 EDT
+restore4   backup1   Completed   2022-07-27 12:39:42 -0400 EDT
+```
+
+Get all the application restores created on the remote cluster with ID `cluster2`
+
+```
+# dellctl restore get --cluster-id cluster2
+NAME       BACKUP    STATUS      CREATED                         COMPLETED
+restore1   backup1   Completed   2022-07-27 12:38:43 -0400 EDT
+```
+
+Get specific restores by name
+
+```
+# dellctl restore get restore1
+NAME       BACKUP    STATUS      CREATED                         COMPLETED
+restore1   backup1   Completed   2022-07-27 12:35:29 -0400 EDT
+```
diff --git a/content/docs/release/_index.md b/content/docs/release/_index.md
index 97a5c32dc9..ffb7d086c3 100644
--- a/content/docs/release/_index.md
+++ b/content/docs/release/_index.md
@@ -1,7 +1,7 @@
 ---
 title: "Release notes"
 linkTitle: "Release notes"
-weight: 10
+weight: 12
 Description: >
   Dell Container Storage Modules (CSM) release notes
 ---
@@ -16,4 +16,8 @@ Release notes for Container Storage Modules:
 
 [CSM for Replication](../replication/release)
 
-[CSM for Resiliency](../resiliency/release)
\ No newline at end of file
+[CSM for Resiliency](../resiliency/release)
+
+[CSM for Encryption](../secure/encryption/release)
+
+[CSM for Application Mobility](../applicationmobility/release)
diff --git a/content/docs/replication/_index.md b/content/docs/replication/_index.md
index df4d1bb45c..d630c2e89a 100644
--- a/content/docs/replication/_index.md
+++ b/content/docs/replication/_index.md
@@ -22,6 +22,7 @@ CSM for Replication provides the following capabilities:
 | Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
 | Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
 | Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no |
+| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
 | Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
 {{}}
 
@@ -43,7 +44,7 @@ CSM for Replication provides the following capabilities:
 {{}}
 |               | PowerMax | PowerStore | PowerScale |
 |---------------|:-------------------:|:----------------:|:----------------:|
-| Storage Array | 5978.479.479, 5978.711.711, Unisphere 9.2 | 1.0.x, 2.0.x, 2.1.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
+| Storage Array | 5978.479.479, 5978.711.711, 6079.xxx.xxx, Unisphere 10.0 | 1.0.x, 2.0.x, 2.1.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 |
 {{}}
 
## Supported CSI Drivers
diff --git a/content/docs/replication/deployment/installation.md b/content/docs/replication/deployment/installation.md
index 005637fac7..6bbabeee29 100644
--- a/content/docs/replication/deployment/installation.md
+++ b/content/docs/replication/deployment/installation.md
@@ -75,8 +75,9 @@ The following CSI drivers support replication:
 1. CSI driver for PowerMax
 2. CSI driver for PowerStore
 3. CSI driver for PowerScale
+4. CSI driver for Unity XT
 
-Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore) or [PowerScale](../powerscale) pages during the driver installation.
+Please follow the steps outlined in the [PowerMax](../powermax), [PowerStore](../powerstore), [PowerScale](../powerscale) or [Unity](../unity) pages during the driver installation.
 
 >Note: Please ensure that replication CRDs are installed in the clusters where you are installing the CSI drivers. These CRDs are generally installed as part of the CSM Replication controller installation process.
diff --git a/content/docs/replication/deployment/powermax.md b/content/docs/replication/deployment/powermax.md
index 2d9fca7e0a..06dc2ec149 100644
--- a/content/docs/replication/deployment/powermax.md
+++ b/content/docs/replication/deployment/powermax.md
@@ -22,11 +22,22 @@ While using any SRDF groups, ensure that they are for exclusive use by the CSI P
 * If an SRDF group is already in use by a CSI driver, don't use it for provisioning replicated volumes outside CSI provisioning workflows.
 
 There are some important limitations that apply to how CSI PowerMax driver uses SRDF groups -
-* One replicated storage group __always__ contains volumes provisioned from a single namespace
-* While using SRDF mode Async/Metro, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels.
+* One replicated storage group using Async/Sync __always__ contains volumes provisioned from a single namespace.
+* While using SRDF mode Async, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels.
 But all these storage classes will be restricted to provisioning volumes within a single namespace.
-* When using SRDF mode Sync, a single SRDF group can be used to provision volumes from multiple namespaces.
-
+* When using SRDF mode Sync/Metro, a single SRDF group can be used to provision volumes from multiple namespaces.
+
+#### Automatic creation of SRDF Groups
+The CSI Driver for PowerMax supports automatic creation of SRDF groups starting with **v2.4.0**, with the help of the Unisphere **10.0** REST endpoints.
+To use this feature:
+* Remove the _replication.storage.dell.com/RemoteRDFGroup_ and _replication.storage.dell.com/RdfGroup_ params from the storage classes before creating the first replicated volume (a sketch of such a storage class is shown after the limitations below).
+* The driver will check for the next available RDF group pair and use it to create volumes.
+* This enables customers to use the same storage class across namespaces to create volumes.
+
+Limitations of automatic SRDF group creation:
+* For Async mode, this feature is supported for namespaces with at most 7 characters.
+* The RDF label used to map a namespace to its RDF group has a limit of 10 characters; 3 characters are used for a cluster prefix to keep the RDF group unique across clusters.
+* For namespaces with more than 7 characters, enter the RDF groups manually in the storage class.
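For illustration, a replication storage class that relies on automatic SRDF group creation might look like the minimal sketch below. The storage class name, array IDs, service levels, and cluster ID are placeholder values; the substantive difference from a conventional replication storage class is simply that the RDF group parameters are omitted:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-replication-auto    # hypothetical name
provisioner: csi-powermax.dellemc.com
reclaimPolicy: Delete
parameters:
  SYMID: '000000000001'                                      # placeholder local array ID
  ServiceLevel: 'Bronze'
  replication.storage.dell.com/IsReplicationEnabled: 'true'
  replication.storage.dell.com/RdfMode: 'ASYNC'
  replication.storage.dell.com/RemoteSYMID: '000000000002'   # placeholder remote array ID
  replication.storage.dell.com/RemoteServiceLevel: 'Bronze'
  replication.storage.dell.com/remoteStorageClassName: 'powermax-replication-auto'
  replication.storage.dell.com/remoteClusterID: 'target'     # placeholder cluster ID
  # RdfGroup and RemoteRDFGroup are intentionally omitted so that the driver
  # selects the next available SRDF group pair automatically.
```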
#### In Kubernetes
Ensure you have installed the CRDs and the replication controller in your clusters.
@@ -105,8 +116,8 @@ parameters:
   replication.storage.dell.com/RemoteServiceLevel: 
   replication.storage.dell.com/RdfMode: 
   replication.storage.dell.com/Bias: "false"
-  replication.storage.dell.com/RdfGroup: 
-  replication.storage.dell.com/RemoteRDFGroup: 
+  replication.storage.dell.com/RdfGroup:  # optional
+  replication.storage.dell.com/RemoteRDFGroup:  # optional
   replication.storage.dell.com/remoteStorageClassName: 
   replication.storage.dell.com/remoteClusterID: 
 ```
@@ -123,8 +134,8 @@ Let's go through each parameter and what it means:
   METRO, driver does not need `RemoteStorageClassName` and `RemoteClusterID` as it supports METRO with single cluster configuration.
 * `replication.storage.dell.com/Bias` when the RdfMode is set to METRO, this parameter is required to indicate driver to use Bias or Witness. If set to true, the driver will configure METRO with Bias, if set to false, the driver will configure METRO with Witness.
-* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured.
-* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured.
+* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured. It is optional when the driver's automatic SRDF group creation is used.
+* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured. It is optional when the driver's automatic SRDF group creation is used.
 
 Let's follow that up with an example; let's assume we have two Kubernetes clusters and two PowerMax storage arrays:
diff --git a/content/docs/replication/deployment/storageclasses.md b/content/docs/replication/deployment/storageclasses.md
index df85a44833..042d351d72 100644
--- a/content/docs/replication/deployment/storageclasses.md
+++ b/content/docs/replication/deployment/storageclasses.md
@@ -29,7 +29,7 @@ This should contain the name of the storage class on the remote cluster which is
 >Note: You still need to create a pair of storage classes even while using a single stretched cluster
 
 ### Driver specific parameters
-Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes) or [PowerScale](../powerscale/#creating-storage-classes) for a detailed list of parameters.
+Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), [PowerScale](../powerscale/#creating-storage-classes) or [Unity](../unity/#creating-storage-classes) for a detailed list of parameters.
 
 ### PV sync Deletion
diff --git a/content/docs/replication/deployment/unity.md b/content/docs/replication/deployment/unity.md
new file mode 100644
index 0000000000..cab4a068fe
--- /dev/null
+++ b/content/docs/replication/deployment/unity.md
@@ -0,0 +1,178 @@
+---
+title: Unity
+linktitle: Unity
+weight: 7
+description: >
+  Enabling Replication feature for CSI Unity
+---
+## Enabling Replication in CSI Unity
+
+Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement `dell-csi-extensions` calls.
+
+The CSI driver for Dell Unity supports the necessary extension calls from `dell-csi-extensions`. To provision replicated volumes, you need to follow the steps described in these sections.
+
+### Before Installation
+
+#### On Storage Array
+Be sure to configure replication between multiple Unity instances using the instructions provided by
+Unity storage.
+
+
+#### In Kubernetes
+Ensure you have installed the CRDs and the replication controller in your clusters.
+
+To verify you have everything in order, you can execute these commands:
+
+* Check controller pods
+  ```shell
+  kubectl get pods -n dell-replication-controller
+  ```
+  Pods should be `READY` and `RUNNING`
+* Check that controller config map is properly populated
+  ```shell
+  kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
+  ```
+  The `data` field should be properly populated with a cluster-id of your choosing and, if using a multi-cluster
+  installation, your `targets:` parameter should be populated with a list of target cluster IDs.
+
+
+If anything is not installed or is out of place, please refer to the installation instructions in [installation-repctl](../install-repctl) or [installation](../installation).
+
+### Installing Driver With Replication Module
+
+To install the driver with replication enabled, you need to ensure you have set
+the Helm parameter `controller.replication.enabled` in your copy of the example `values.yaml` file
+(usually called `my-unity-settings.yaml`, `myvalues.yaml`, etc.).
+
+Here is an example of what that would look like:
+```yaml
+...
+# controller: configure controller specific parameters
+controller:
+  ...
+  # replication: allows to configure replication
+  replication:
+    enabled: true
+    image: dellemc/dell-csi-replicator:v1.2.0
+    replicationContextPrefix: "unity"
+    replicationPrefix: "replication.storage.dell.com"
+...
+```
+You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
+
+After enabling the replication module, you can continue to install the CSI driver for Unity following the usual installation procedure. Just ensure you've added the necessary array connection information to the secret.
+
+> **_NOTE:_** You need to install the driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster.
+
+
+### Creating Storage Classes
+
+To provision replicated volumes, you need to create adequately configured storage classes on both the source and target clusters.
+
+A pair of storage classes on the source and target clusters would be essentially `mirrored` copies of one another.
+You can create them manually or with the help of `repctl`.
+
+#### Manual Storage Class Creation
+
+You can find a sample replication-enabled storage class in the driver repository [here](https://github.com/dell/csi-unity/blob/main/samples/storageclass/unity-replication.yaml).
+
+It will look like this:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: unity-replication
+provisioner: csi-unity.dellemc.com
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+parameters:
+  replication.storage.dell.com/isReplicationEnabled: "true"
+  replication.storage.dell.com/remoteStorageClassName: "unity-replication"
+  replication.storage.dell.com/remoteClusterID: "target"
+  replication.storage.dell.com/remoteSystem: "APM000000002"
+  replication.storage.dell.com/rpo: "5"
+  replication.storage.dell.com/ignoreNamespaces: "false"
+  replication.storage.dell.com/volumeGroupPrefix: "csi"
+  replication.storage.dell.com/remoteStoragePool: pool_002
+  replication.storage.dell.com/remoteNasServer: nas_124
+  arrayId: "APM000000001"
+  protocol: "NFS"
+  storagePool: pool_001
+  nasServer: nas_123
+```
+
+Let's go through each parameter and what it means:
+* `replication.storage.dell.com/isReplicationEnabled` if set to `true`, will mark this storage class as replication enabled,
+  just leave it as `true`.
+* `replication.storage.dell.com/remoteStorageClassName` points to the name of the remote storage class. If you are using replication with the multi-cluster configuration you can make it the same as the current storage class name.
+* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote cluster. It is the same ID you put in the replication controller config map.
+* `replication.storage.dell.com/remoteSystem` is the name of the remote system that should match the `clusterName` you used for it in the `unity-creds` secret.
+* `replication.storage.dell.com/rpo` is an acceptable amount of data, measured in units of time, that may be lost due to a failure.
+* `replication.storage.dell.com/ignoreNamespaces`, if set to `true`, makes the Unity driver ignore in which namespace volumes are created and put every volume created using this storage class into a single volume group.
+* `replication.storage.dell.com/volumeGroupPrefix` represents the string appended to the volume group name to differentiate volume groups.
+* `arrayId` is a unique identifier of the storage array you specified in the array connection secret.
+* `nasServer` is the ID of the NAS server on the local array to which the allocated volume will belong.
+* `storagePool` is the storage pool of the local array.
+
+After figuring out how the storage classes should look, you just need to apply them to your Kubernetes clusters with `kubectl`.
+
+#### Storage Class creation with `repctl`
+
+`repctl` can simplify storage class creation by creating a pair of mirrored storage classes in both clusters
+(using a single storage class configuration) in one command.
+
+To create storage classes with `repctl`, you need to fill in the config with the necessary information.
+You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/unity_example_values.yaml), copy it, and modify it to your needs.
+
+If you open this example, you can see many of the same fields and parameters that you can modify in the storage class.
+
+Let's use the same example from the manual installation and see what the config would look like:
+```yaml
+targetClusterID: "cluster-2"
+sourceClusterID: "cluster-1"
+name: "unity-replication"
+driver: "unity"
+reclaimPolicy: "Retain"
+replicationPrefix: "replication.storage.dell.com"
+remoteRetentionPolicy:
+  RG: "Retain"
+  PV: "Retain"
+parameters:
+  arrayId:
+    source: "APM000000001"
+    target: "APM000000002"
+  storagePool:
+    source: pool_123
+    target: pool_124
+  rpo: "0"
+  ignoreNamespaces: "false"
+  volumeGroupPrefix: "prefix"
+  protocol: "NFS"
+  nasServer:
+    source: nas_123
+    target: nas_123
+```
+
+After preparing the config, you can apply it to both clusters with `repctl`. Before you do this, ensure you've added your clusters to `repctl` via the `add` command.
+
+To create the storage classes, run `./repctl create sc --from-config <config>`; the storage classes will be applied to both clusters.
+
+After creating the storage classes, you can make sure they are in place by using the `./repctl get storageclasses` command.
+
+### Provisioning Replicated Volumes
+
+After installing the driver and creating the storage classes, you can create volumes using the newly
+created storage classes.
+
+On your source cluster, create a PersistentVolumeClaim using one of the replication-enabled Storage Classes.
+The CSI Unity driver will create a volume on the array, add it to a VolumeGroup, and configure replication
+using the parameters provided in the replication-enabled Storage Class.
+
+### Supported Replication Actions
+The CSI Unity driver supports the following replication actions:
+- FAILOVER_REMOTE
+- UNPLANNED_FAILOVER_LOCAL
+- REPROTECT_LOCAL
+- SUSPEND
+- RESUME
+- SYNC
diff --git a/content/docs/replication/high-availability.md b/content/docs/replication/high-availability.md
index 1f2d9b7fe2..3f4aacf5d6 100644
--- a/content/docs/replication/high-availability.md
+++ b/content/docs/replication/high-availability.md
@@ -37,9 +37,9 @@ parameters:
  SYMID: '000000000001'
  ServiceLevel: 'Bronze'
  replication.storage.dell.com/IsReplicationEnabled: 'true'
-  replication.storage.dell.com/RdfGroup: '7'
+  replication.storage.dell.com/RdfGroup: '7' # Optional for Auto SRDF group
  replication.storage.dell.com/RdfMode: 'METRO'
-  replication.storage.dell.com/RemoteRDFGroup: '7'
+  replication.storage.dell.com/RemoteRDFGroup: '7' # Optional for Auto SRDF group
  replication.storage.dell.com/RemoteSYMID: '000000000002'
  replication.storage.dell.com/RemoteServiceLevel: 'Bronze'
 reclaimPolicy: Delete
diff --git a/content/docs/replication/replication-actions.md b/content/docs/replication/replication-actions.md
index fa9502265c..96eece95f8 100644
--- a/content/docs/replication/replication-actions.md
+++ b/content/docs/replication/replication-actions.md
@@ -34,11 +34,11 @@ For e.g. -
 The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:
 {{<table "table table-striped table-bordered table-sm">}}
-| Workflow | Actions | PowerMax | PowerStore | PowerScale |
-| ------------------- | ----------------------------------- | --------------------- | -------------------------------------- | ---------------------------------------------- |
-| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) | allow_writes on target, disable local policy |
-| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT | enable local policy, disallow_writes on remote |
-| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) | break association on target |
+| Workflow | Actions | PowerMax | PowerStore | PowerScale | Unity |
+| ------------------- | ----------------------------------- | --------------------- | -------------------------------------- | ---------------------------------------------- |---------------------------------------|
+| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) | allow_writes on target, disable local policy | FAILOVER (no REPROTECT after FAILOVER)|
+| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT | enable local policy, disallow_writes on remote | REPROTECT |
+| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) | break association on target | FAILOVER (at target site) |
{{</table>}}

### Maintenance Actions
@@ -46,11 +46,11 @@ These actions can be run at any site and are used to change the replication link
 The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays
 {{<table "table table-striped table-bordered table-sm">}}
-| Action | Description | PowerMax | PowerStore | PowerScale |
-|-----------|--------------------------------------|----------------|------------|----------------------|
-| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE | disable local policy |
-| RESUME | Resume replication | symrdf resume | RESUME | enable local policy |
-| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW | start syncIQ job |
+| Action | Description | PowerMax | PowerStore | PowerScale | Unity |
+|-----------|--------------------------------------|----------------|------------|----------------------|--------|
+| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE | disable local policy | PAUSE |
+| RESUME | Resume replication | symrdf resume | RESUME | enable local policy | RESUME |
+| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW | start syncIQ job | SYNC |
{{</table>}}

### How to perform actions
diff --git a/content/docs/replication/volume_expansion.md b/content/docs/replication/volume_expansion.md
new file mode 100644
index 0000000000..464811d519
--- /dev/null
+++ b/content/docs/replication/volume_expansion.md
@@ -0,0 +1,44 @@
+---
+title: Volume Expansion
+linktitle: Volume Expansion
+weight: 6
+description: >
+  Online expansion of replicated volumes
+---
+
+Starting in v2.4.0, the CSI PowerMax driver supports the expansion of Replicated Persistent Volumes (PVs). The expansion is performed online, that is, while the PVC is attached to a node.
+
+## Prerequisites
+- To use this feature, enable the resizer in values.yaml:
+```yaml
+resizer:
+  enabled: true
+```
+- To use this feature, the storage class that is used to create the PVC must have the attribute `allowVolumeExpansion` set to `true`.
+
+## Basic Usage
+
+To resize a PVC, edit the existing PVC spec and set `spec.resources.requests.storage` to the intended size. For example, if you have a PVC named pmax-pvc-demo of size 5Gi, you can resize it to 10Gi by updating the PVC:
+
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pmax-pvc-demo
+  namespace: test
+spec:
+  accessModes:
+    - ReadWriteOnce
+  volumeMode: Filesystem
+  resources:
+    requests:
+      storage: 10Gi # Updated size from 5Gi to 10Gi
+  storageClassName: powermax-expand-sc
+```
+Update the remote PVC with the expanded size:
+
+1. Update the remote PVC size to match the size of the local PVC.
+
+2. After the remote CSI driver syncs, the volume size will be updated to show the new size.
+
+*NOTE*: The Kubernetes Volume Expansion feature can only be used to increase the size of a volume; it cannot be used to shrink a volume.
diff --git a/content/docs/resiliency/_index.md b/content/docs/resiliency/_index.md
index ab043bc23d..e945bea855 100644
--- a/content/docs/resiliency/_index.md
+++ b/content/docs/resiliency/_index.md
@@ -144,7 +144,13 @@
pmtu3     podmontest-0   1/1     Running   0     3m6s
...
```
- CSM for Resiliency may also generate events if it is unable to cleanup a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.
+ CSM for Resiliency may also generate events if it is unable to clean up a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.
+
+ Similarly, the label selectors for csi-powerscale and csi-unity are as shown below, respectively:
+ ```
+ labelSelector: {map[podmon.dellemc.com/driver:csi-isilon]
+ labelSelector: {map[podmon.dellemc.com/driver:csi-unity]
+ ```

#### Important
Before putting an application into production that relies on CSM for Resiliency monitoring, it is important to do a few test failovers first. To do this, take the node that is running the pod offline for at least 2-3 minutes. Verify that an event message similar to the one above is logged, and that the pod recovers and restarts normally with no loss of data. (Note that if the node is running many CSM for Resiliency protected pods, the node may need to be down longer for CSM for Resiliency to have time to evacuate all the protected pods.)
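+
+As a rough illustration of such a test failover, the sketch below watches a protected pod while its node is taken offline out-of-band (for example, by powering the node off or disconnecting its network); the namespace `pmtu3` and pod name `podmontest-0` are assumptions carried over from the sample output above.
+```
+# Watch the protected pod while the node it runs on is offline.
+kubectl get pods -n pmtu3 -o wide -w
+
+# After 2-3 minutes, check the events recorded for the pod.
+kubectl get events -n pmtu3 --field-selector involvedObject.name=podmontest-0
+```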
diff --git a/content/docs/resiliency/deployment.md b/content/docs/resiliency/deployment.md
index 8a4a20519f..11cda42513 100644
--- a/content/docs/resiliency/deployment.md
+++ b/content/docs/resiliency/deployment.md
@@ -21,11 +21,10 @@ Configure all the helm chart parameters described below before installing the dr
 The drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed by variables in the chart. There is a _podmon_ block specified in the _values.yaml_ file of the chart that will look similar to the text below by default:
```
-# Podmon is an optional feature under development and tech preview.
# Enable this feature only after contacting support for additional information
podmon:
  enabled: true
-  image: dellemc/podmon:v1.2.0
+  image: dellemc/podmon:v1.3.0
  controller:
    args:
      - "--csisock=unix:/var/run/csi/csi.sock"
diff --git a/content/docs/resiliency/release/_index.md b/content/docs/resiliency/release/_index.md
index 3beec86748..96d9a62f47 100644
--- a/content/docs/resiliency/release/_index.md
+++ b/content/docs/resiliency/release/_index.md
@@ -6,16 +6,13 @@ Description: >
  Dell Container Storage Modules (CSM) release notes for resiliency
---
-## Release Notes - CSM Resiliency 1.2.0
+## Release Notes - CSM Resiliency 1.3.0
### New Features/Changes
-- Support for node taint when driver pod is unhealthy.
-- Resiliency protection on driver node pods, see [CSI node failure protection](https://github.com/dell/csm/issues/145).
-- Resiliency support for CSI Driver for PowerScale, see [CSI Driver for PowerScale](https://github.com/dell/csm/issues/262).
### Fixed Issues
-- Occasional failure unmounting Unity volume for raw block devices via iSCSI, see [unmounting Unity volume](https://github.com/dell/csm/issues/237).
+- Documentation improvement to identify all requirements for building the service and running unit tests in the CSM Authorization and CSM Resiliency repositories (https://github.com/dell/karavi-resiliency/pull/131).
### Known Issues
\ No newline at end of file
diff --git a/content/docs/secure/_index.md b/content/docs/secure/_index.md
new file mode 100644
index 0000000000..48031f8877
--- /dev/null
+++ b/content/docs/secure/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Secure"
+linkTitle: "Secure"
+weight: 9
+Description: >
+  Security features for Dell CSI drivers
+---
+Secure is a suite of Dell Container Storage Modules (CSM) that brings security-related features to Kubernetes users of Dell storage products.
diff --git a/content/docs/secure/encryption/_index.md b/content/docs/secure/encryption/_index.md
new file mode 100644
index 0000000000..c753da27b1
--- /dev/null
+++ b/content/docs/secure/encryption/_index.md
@@ -0,0 +1,130 @@
+---
+title: "Encryption"
+linkTitle: "Encryption"
+weight: 1
+Description: >
+  CSI Volumes Encryption
+---
+Encryption provides the capability to encrypt user data residing on volumes created by Dell CSI Drivers.
+
+> **NOTE:** This tech-preview release is not intended for use in a production environment.
+
+> **NOTE:** Encryption requires a time-based license to create new encrypted volumes. Request a [trial license](../../license) prior to deployment.
+>
+> After the license expires, existing encrypted volumes can still be unlocked and used, but no new encrypted volumes can be created.
+
+The volume data is encrypted on the Kubernetes worker host running the application workload, transparently for the application.
+
+Under the hood, *gocryptfs*, an open-source FUSE-based encryptor, is used to encrypt both file contents and the names of files and directories.
+
+File contents are encrypted using AES-256-GCM, and names are encrypted using AES-256-EME.
+
+*gocryptfs* needs a password to initialize and to unlock the encrypted file system.
+Encryption generates 32 random bytes for the password and stores them in Hashicorp Vault.
+
+For detailed information on the cryptography behind gocryptfs, see [gocryptfs Cryptography](https://nuetzlich.net/gocryptfs/forward_mode_crypto).
+
+When a CSI Driver is installed with the Encryption feature enabled, two provisioners are registered in the cluster:
+
+#### Provisioner for unencrypted volumes
+
+This provisioner belongs to the storage driver and does not depend on the Encryption feature. Use a storage class with this provisioner to create regular unencrypted volumes.
+
+#### Provisioner for encrypted volumes
+
+This provisioner belongs to Encryption and registers with the name [`encryption.pluginName`](deployment/#helm-chart-values) when Encryption is enabled. Use a storage class with this provisioner to create encrypted volumes.
+
+## Capabilities
+
+{{<table "table table-striped table-bordered table-sm">}}
+| Feature | PowerScale |
+| ------- | ---------- |
+| Dynamic provisioning of new volumes | Yes |
+| Static provisioning of new volumes | Yes |
+| Volume snapshot creation | Yes |
+| Volume creation from snapshot | Yes |
+| Volume cloning | Yes |
+| Volume expansion | Yes |
+| Encrypted volume unlocking in a different cluster | Yes |
+| User file and directory names encryption | Yes |
+{{</table>}}
+
+## Limitations
+
+- Only file system volumes are supported.
+- Existing volumes with data cannot be encrypted.<br>
+  **Workaround:** create a new encrypted volume of the same size and copy/move the data from the original *unencrypted* volume to the new *encrypted* volume.
+- Encryption cannot be disabled in-place.<br>
+  **Workaround:** create a new unencrypted volume of the same size and copy/move the data from the original *encrypted* volume to the new *unencrypted* volume.
+- Encrypted volume content can be seen in clear text through root access to the worker node or by obtaining shell access into the Encryption driver container.
+- When deployed with the PowerScale CSI driver, `controllerCount` has to be set to 1.
+- No other CSM component can be enabled simultaneously with Encryption.
+- The only supported authentication method for Vault is AppRole.
+- Encryption secrets, config maps, and encryption-related values cannot be updated while the CSI driver is running;
+the CSI driver must be restarted to pick up the change.
+
+## Supported Operating Systems/Container Orchestrator Platforms
+
+{{<table "table table-striped table-bordered table-sm">}}
+| COP/OS | Supported Versions |
+|-|-|
+| Kubernetes | 1.22, 1.23, 1.24 |
+| RHEL | 7.9, 8.4 |
+| Ubuntu | 18.04, 20.04 |
+| SLES | 15SP2 |
+{{</table>}}
+
+## Supported Storage Platforms
+
+{{<table "table table-striped table-bordered table-sm">}}
+|               | PowerScale |
+|---------------|------------|
+| Storage Array | OneFS 9.0 |
+{{</table>}}
+
+## Supported CSI Drivers
+
+Encryption supports these CSI drivers and versions:
+{{<table "table table-striped table-bordered table-sm">}}
+| Storage Array | CSI Driver | Supported Versions |
+| ------------- | ---------- | ------------------ |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.4 + |
+{{</table>}}
+
+### PowerScale
+
+When enabling Encryption for the PowerScale CSI Driver, make sure these requirements are met:
+- PowerScale CSI Driver uses root credentials for the storage array where encrypted volumes will be placed
+- OneFS NFS export configuration does not have root user mapping enabled
+- All other CSM features like Authorization, Replication, and Resiliency are disabled
+- Health Monitor feature is disabled
+- CSI driver `controllerCount` is set to 1
+
+## Hashicorp Vault Support
+
+**Supported Vault version is 1.9.3 and newer.**
+
+A Vault server (or cluster) is typically deployed in a dedicated Kubernetes cluster, but for the purpose of Encryption, it can be located anywhere.
+Even the simplest standalone single-instance server with in-memory storage will suffice for testing.
+
+> **NOTE:** A properly deployed and configured Vault is crucial for the security of the volumes encrypted with Encryption.
+Please refer to the Hashicorp Vault documentation regarding recommended deployment options.
+
+> **CAUTION:** A compromised Vault server or Vault storage back-end may lead to unauthorized access to the volumes encrypted with Encryption.
+
+> **CAUTION:** A destroyed Vault storage back-end, or destruction of the encryption keys stored in it, will make it impossible to unlock the volumes encrypted with Encryption.
+Access to the data will be lost forever.
+
+Refer to the [Vault Configuration section](vault) for the minimal configuration steps required to support Encryption and for other configuration considerations.
+
+## Kubernetes Worker Hosts Requirements
+
+- Each Kubernetes worker host should have an SSH server running.
+- The SSH server should have SSH public key authentication enabled for user *root*.
+- The SSH server should remain running the whole time an application with an encrypted volume is running on the host.
+> **NOTE:** Stopping the SSH server on the worker host makes any encrypted volume attached to this host [inaccessible](troubleshooting#ssh-stopped).
+- Each Kubernetes worker host should have the commands `fusermount` and `mount.fuse`. They are pre-installed in most Linux distros.
+To install the *fuse* package on Ubuntu/Debian, run a command similar to `apt install fuse`.
+To install the *fuse* package on SUSE, run a command similar to `zypper install fuse`.
+
+
diff --git a/content/docs/secure/encryption/deployment.md b/content/docs/secure/encryption/deployment.md
new file mode 100644
index 0000000000..69dd0b6471
--- /dev/null
+++ b/content/docs/secure/encryption/deployment.md
@@ -0,0 +1,170 @@
+---
+title: "Deployment"
+linkTitle: "Deployment"
+weight: 1
+Description: >
+  Deployment
+---
+Encryption is enabled as part of the Dell CSI driver installation. The drivers can be installed either by a Helm chart or by the Dell CSI Operator.
+In the tech preview release, Encryption can only be enabled via Helm chart installation.
+
+Except for the additional Encryption-related configuration outlined on this page,
+the rest of the deployment process is described in the corresponding [CSI driver documentation](../../../csidriver/installation/helm).
+
+## Vault Server
+
+Hashicorp Vault must be [pre-configured](../vault) to support Encryption. The Vault server's IP address and port must be accessible
+from the Kubernetes cluster where the CSI driver is to be deployed.
+
+## Helm Chart Values
+
+The drivers that support Encryption via Helm chart have an `encryption` block in their *values.yaml* file that looks like this:
+
+```yaml
+encryption:
+  # enabled: Enable/disable volume encryption feature.
+  enabled: false
+
+  # pluginName: The name of the provisioner to use for encrypted volumes.
+  pluginName: "sec-isilon.dellemc.com"
+
+  # image: Encryption driver image name.
+  image: "dellemc/csm-encryption:v0.1.0"
+
+  # imagePullPolicy: If specified, overrides the chart global imagePullPolicy.
+  imagePullPolicy:
+
+  # logLevel: Log level of the encryption driver.
+  # Allowed values: "error", "warning", "info", "debug", "trace".
+  logLevel: "error"
+
+  # livenessPort: HTTP liveness probe port number.
+  # Leave empty to disable the liveness probe.
+  # Example: 8080
+  livenessPort:
+
+  # extraArgs: Extra command line parameters to pass to the encryption driver.
+  # Allowed values:
+  # --sharedStorage - may be required by some applications to work properly.
+  # When set, performance is reduced and hard links cannot be created.
+  # See the gocryptfs documentation for more details.
+  extraArgs: []
+```
+
+| Parameter | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| enabled | Enable/disable volume encryption feature. | No | false |
+| pluginName | The name of the provisioner to use for encrypted volumes. | No | "sec-isilon.dellemc.com" |
+| image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
+| imagePullPolicy | If specified, overrides the chart global imagePullPolicy. | No | CSI driver global imagePullPolicy |
+| logLevel | Log level of the encryption driver.<br>Allowed values: "error", "warning", "info", "debug", "trace". | No | "error" |
+| livenessPort | HTTP liveness probe port number. Leave empty to disable the liveness probe. | No | |
+| extraArgs | Extra command line parameters to pass to the encryption driver.<br>Allowed values:<br>"\-\-sharedStorage" - may be required by some applications to work properly.<br>When set, performance is reduced and hard links cannot be created.<br>See the [gocryptfs documentation](https://github.com/rfjakob/gocryptfs/blob/v2.2.1/Documentation/MANPAGE.md#-sharedstorage) for more details. | No | [] |
+
+## Secrets and Config Maps
+
+Apart from any secrets and config maps described in the CSI driver documentation, these resources should be created for Encryption:
+
+### Secret *encryption-license*
+
+Request a trial license following the instructions on the [License page](../../../license). You will be provided with a YAML file similar to:
+
+```yaml
+apiVersion: v1
+data:
+  license: k1FXzMDZodGNnK4I12Alo4UvuhLd+ithRhuLz2eoIxlcMSfW0xJYWnBiNMvTUl8VdGmR5fsvs2L6KqPfpIJk4wOzCxQ9wfDIJuYqrwV0wi2F2lzb1Hkk7O7/4r8cblPdCRJWfbg8QFc2BVtl4PZ/pFkHZoZVCbhGDD1MsbI1CiKqva9r9TBfswSFnqv7p3QXgbqQov8/q/j2+sHcvFF3j4kx+q1PzXoRNxwuTQaP4VAvipsQNAU5yV2dos2hs4Y/Ltbtreu/vrRGUaxvPbass1vUtIOJnvKkfbp53j8PFJGGISMYvYylUiD7TpoamxT/1I6mkjgRds+tEciMvutqDpmKEtdyp3vBjt4Sgd07ptvsdBJlyRAYb8ZPX9vXr4Ws
+kind: Secret
+metadata:
+  name: edit_name
+  namespace: edit_namespace
+```
+
+Set `name` to `"encryption-license"` and `namespace` to your driver namespace, then apply the file:
+
+```shell
+kubectl apply -f <file>
+```
+
+### Secret *vault-auth*
+
+A secret with the AppRole credentials used by Encryption to authenticate to the Vault server.
+
+> Set `role_id` and `secret_id` to the values provided by the Vault server administrator.
+
+> If a self-managed test Vault instance is used, generate the role ID and secret ID following [these steps](../vault/#set-role-id-and-secret-id-to-the-role).
+
+```shell
+cat >auth.json <<EOF
+{
+    "role_id": "<role-id>",
+    "secret_id": "<secret-id>"
+}
+EOF
+
+kubectl create secret generic vault-auth -n <driver-namespace> --from-file=auth.json -o yaml --dry-run=client | kubectl apply -f -
+
+rm -f auth.json
+```
+In this release, Encryption does not pick up modifications to this secret while the CSI driver is running, unless it needs to re-login, which happens at:
+- CSI Driver startup
+- an authentication error from the Vault server
+- client token expiration
+
+In all other cases, to apply new values in the secret (e.g., to use another role), the CSI driver must be restarted.
+
+### Secret *vault-cert*
+
+A secret with TLS certificates used by Encryption to communicate with the Vault server.
+
+> Files *server-ca.crt*, *client.crt* and *client.key* should be in PEM format.
+
+```shell
+kubectl create secret generic vault-cert -n <driver-namespace> \
+ --from-file=server-ca.crt --from-file=client.crt --from-file=client.key \
+ -o yaml --dry-run=client | kubectl apply -f -
+```
+In this release, Encryption does not pick up modifications to this secret while the CSI driver is running.
+To apply new values in the secret (e.g., to update the client certificate), the CSI driver must be restarted.
+
+### ConfigMap *vault-client-conf*
+
+A config map with settings used by Encryption to communicate with the Vault server.
+
+> Populate *client.json* with your settings.
+
+```shell
+cat >client.json <<EOF
+{
+    "auth_type": "approle",
+    "auth_conf_file": "/etc/dea/vault/auth.json",
+    "vault_addr": "https://<IP-address>:8400",
+    "kv_engine_path": "/dea-keys",
+    "tls_config":
+    {
+        "client_crt": "/etc/dea/vault/client.crt",
+        "client_key": "/etc/dea/vault/client.key",
+        "server_ca": "/etc/dea/vault/server-ca.crt"
+    }
+}
+EOF
+
+kubectl create configmap vault-client-conf -n <driver-namespace> \
+ --from-file=client.json -o yaml --dry-run=client | kubectl apply -f -
+
+rm -f client.json
+```
+
+These fields are available for use in *client.json*:
+
+| client.json field | Description | Required | Default |
+| ----------------- | ----------- | -------- | ------- |
+| auth_type | Authentication type used to authenticate to the Vault server.<br>Currently, the only supported type is "approle". | Yes | |
+| auth_conf_file | Set to "/etc/dea/vault/auth.json" | Yes | |
+| auth_timeout | Defines in how many seconds key requests to the Vault server fail if there is no valid authentication token. | No | 5 |
+| lease_duration_margin | Defines how many seconds in advance the authentication token lease will be renewed. This value should accommodate network and processing delays. | No | 15 |
+| lease_increase | Defines the number of seconds used in the authentication token renew call. This value is advisory and may be disregarded by the server. | No | 3600 |
+| vault_addr | URL to use for REST calls to the Vault server. It must start with "https". | Yes | |
+| kv_engine_path | The path to which the Key/Value secret engine is mounted on the Vault server. | Yes | |
+| tls_config.client_crt | Set to "/etc/dea/vault/client.crt" | Yes | |
+| tls_config.client_key | Set to "/etc/dea/vault/client.key" | Yes | |
+| tls_config.server_ca | Set to "/etc/dea/vault/server-ca.crt" | Yes | |
diff --git a/content/docs/secure/encryption/release.md b/content/docs/secure/encryption/release.md
new file mode 100644
index 0000000000..0ae7f1c450
--- /dev/null
+++ b/content/docs/secure/encryption/release.md
@@ -0,0 +1,21 @@
+---
+title: "Release Notes"
+linkTitle: "Release Notes"
+weight: 5
+Description: >
+  Release Notes
+---
+
+### New Features/Changes
+
+- [Technical preview release](https://github.com/dell/csm/issues/437)
+- PowerScale CSI volumes encryption (for new volumes)
+- Encryption keys stored in Hashicorp Vault
+
+### Fixed Issues
+
+There are no fixed issues in this release.
+
+### Known Issues
+
+There are no known issues in this release.
\ No newline at end of file
diff --git a/content/docs/secure/encryption/troubleshooting.md b/content/docs/secure/encryption/troubleshooting.md
new file mode 100644
index 0000000000..455a9fb8d2
--- /dev/null
+++ b/content/docs/secure/encryption/troubleshooting.md
@@ -0,0 +1,87 @@
+---
+title: "Troubleshooting"
+linkTitle: "Troubleshooting"
+weight: 4
+Description: >
+  Troubleshooting
+---
+
+## Logs and Events
+
+The first, and in most cases sufficient, step in troubleshooting issues with a CSI driver that has Encryption enabled
+is exploring the logs of the Encryption driver and related Kubernetes components. These are some useful log sources:
+
+### CSI Driver Containers Logs
+
+The driver creates several *controller* and *node* pods. They can be listed with `kubectl -n <driver-namespace> get pods`.
+The output will look similar to:
+
+```
+NAME                              READY   STATUS    RESTARTS   AGE
+isi-controller-84f697c874-2j6d4   10/10   Running   0          16h
+isi-node-4gtwf                    4/4     Running   0          16h
+isi-node-lnzws                    4/4     Running   0          16h
+```
+
+List the containers in pod `isi-node-4gtwf` with `kubectl -n <driver-namespace> logs isi-node-4gtwf`.
+Each pod has a container called `driver`, which is the storage driver container, and `driver-sec`, which is the Encryption driver container.
+These containers' logs tend to provide the most important information, but other containers may give a hint too.
+View the logs of `driver-sec` in `isi-node-4gtwf` with `kubectl -n <driver-namespace> logs isi-node-4gtwf driver-sec`.
+The log level of this container can be changed by setting the value [encryption.logLevel](../deployment#helm-chart-values) and restarting the driver.
+
+Often it is necessary to see the logs produced on a specific Kubernetes worker host.
+To find which *node* pod is running on which worker host, use `kubectl -n <driver-namespace> get pods -o wide`.
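+
+For convenience, a minimal sketch that gathers the logs described above in one pass is shown below; the namespace `isilon` and pod name `isi-node-4gtwf` are illustrative assumptions, so substitute your own driver namespace and pod names.
+```
+# Find which node pod runs on which worker host.
+kubectl -n isilon get pods -o wide
+
+# List the containers inside a node pod.
+kubectl -n isilon get pod isi-node-4gtwf -o jsonpath='{.spec.containers[*].name}'
+
+# Tail the Encryption driver container logs from that pod.
+kubectl -n isilon logs isi-node-4gtwf driver-sec --tail=100
+```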
+
+### PersistentVolume, PersistentVolumeClaim and Application Pod Events
+
+Some errors may be logged to the related resource events, which can be viewed with the `kubectl describe` command for that resource.
+
+### Vault Server Logs
+
+Some errors related to communication with the Vault server and key requests may be logged on the Vault server side.
+If you run a [test instance of the server in a Docker container](../vault#vault-server-installation), you can view the logs with `docker logs vault-server`.
+
+## Typical Failure Reasons
+
+#### Incorrect Vault-related configuration
+
+- check [logs](#logs-and-events)
+- check [vault-auth secret](../deployment#secret-vault-auth)
+- check [vault-cert secret](../deployment#secret-vault-cert)
+- check [vault-client-conf config map](../deployment#configmap-vault-client-conf)
+
+#### Incorrect Vault server-side configuration
+
+- check [logs](#logs-and-events)
+- check [Vault server configuration](../vault#minimum-server-configuration)
+
+#### Expired AppRole secret ID
+
+- [reset the role secret ID](../vault#set-role-id-and-secret-id-to-the-role)
+
+#### Incorrect CSI driver configuration
+
+- check the related CSI driver [troubleshooting steps](../../../csidriver/troubleshooting)
+
+#### SSH server is stopped/restarted on the worker host {#ssh-stopped}
+
+This may manifest in:
+- failure to start the CSI driver
+- failure to create a new encrypted volume
+- failure to access an encrypted volume (IO errors)
+
+Resolution:
+- check that the SSH server is running on all worker hosts
+- stop all workloads that use encrypted volumes on the node, then restart them
+
+#### No license provided, or license expired
+
+This may manifest in:
+- failure to start the CSI driver
+- failure to create a new encrypted volume
+
+Resolution:
+- obtain a [new valid license](../../../license)
+- check that the license is for the cluster on which the encrypted volumes are created
+- check the [encryption-license secret](../deployment#secret-encryption-license)
+
diff --git a/content/docs/secure/encryption/uninstallation.md b/content/docs/secure/encryption/uninstallation.md
new file mode 100644
index 0000000000..b6ccc76368
--- /dev/null
+++ b/content/docs/secure/encryption/uninstallation.md
@@ -0,0 +1,39 @@
+---
+title: "Uninstallation"
+linkTitle: "Uninstallation"
+weight: 2
+Description: >
+  Uninstallation
+---
+
+## Cleanup Kubernetes Worker Hosts
+
+Log in to each worker host and perform these steps:
+
+#### Remove directory */root/.driver-sec*
+
+This directory was created when a CSI driver with Encryption first ran on the host.
+
+#### Remove entry from */root/.ssh/authorized_keys*
+
+This is an entry added when a CSI driver with Encryption first ran on the host.
+It ends with `driver-sec`, similarly to: + +``` +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDGvSWmTL7NORRDPAvtbMbvoHUBLnen9bRtJePbGk1boJ4XK39Qdvo2zFHZ/6t2+dSL7xKo2kcxX3ovj3RyOPuqNCob +5CLYyuIqduooy+eSP8S1i0FbiDHvH/52yHglnGkBb8g8fmoMolYGW7k35mKOEItKlXruP5/hpP0rBDfBfrxe/K4aHicxv6GylP+uTSBjdj7bZrdgRAIlmDyIdvU4oU6L +K9PDW5rufArlrZHaToHXLMbXbqswD08rgFt3tLiXjj2GgvU8ifWYYAeuijMp+hwwE0dYv45EgUNTlXUa7x2STFZrVn8MFkLKjtZ60Qjbb4JoijRpBQ5XEUkW9UoeGbV2 +s+lCpZ2bMkmdda/0UC1ckvyrLkD0yQotb8gafizdX+WrQRE+iqUv/NQ2mrSEHtLgvuvgZ3myFU5chRv498YxglYZsAZUdCQI2hQt+7smjYMaM0V200UT741U9lIlYxza +ocI5t+n01dWeVOCSOH/Q3uXxHKnFvWVZh7m6583R9LfdGfwshsnx4CNz22kp69hzwBPxehR+U/VXkDUWnoQgI8NSPc0fFyU58yLHnl91XT9alz8qrkFK7oggKy5RRX7c +VQrpjsCPCu3fpVjvvwfspVOftbn/sNgY1J3lz0pdgvJ3yQs6pa+DODQyin5Rt//19rIGifPxi/Hk/k49Vw== driver-sec +``` + +It can be removed with `sed -i '/^ssh-rsa .* driver-sec$/d' /root/.ssh/authorized_keys`. + +## Remove Kubernetes Resources + +Remove [the resources that were created in Kubernetes cluster for Encryption](../deployment#secrets-and-config-maps). + +## Remove Vault Server Configuration + +Remove [the configuration created in the Vault server for Encryption](../vault#minimum-server-configuration). diff --git a/content/docs/secure/encryption/vault.md b/content/docs/secure/encryption/vault.md new file mode 100644 index 0000000000..734103f64e --- /dev/null +++ b/content/docs/secure/encryption/vault.md @@ -0,0 +1,244 @@ +--- +title: "Vault Configuration" +linkTitle: "Vault Configuration" +weight: 3 +Description: > + Configuration requirements for Vault server +--- + +## Vault Server Installation + +If there is already a Vault server available, skip to [Minimum Server Configuration](#minimum-server-configuration). + +If there is no Vault server available to use with Encryption, it can be installed in many ways following [Hashicorp Vault documentation](https://www.vaultproject.io/docs). + +For testing environment, however, a simple deployment suggested in this section may suffice. +It creates a standalone server with in-memory (non-persistent) storage, running in a Docker container. + +> **NOTE**: With in-memory storage, the encryption keys are permanently destroyed upon the server termination. + +#### Generate TLS certificates for server and client + +Create server CA private key and certificate: + +```shell +openssl req -x509 -sha256 -days 365 -newkey rsa:2048 -nodes \ + -subj "/CN=Vault Root CA" \ + -keyout server-ca.key \ + -out server-ca.crt +``` + +Create server private key and CSR: + +```shell +openssl req -newkey rsa:2048 -nodes \ + -subj "/CN=vault-demo-server" \ + -keyout server.key \ + -out server.csr +``` + +Create server certificate signed by the CA: + +> Replace `` with an IP address by which Encryption can reach the Vault server. +This may be the address of the Docker host where the Vault server will be running. +The same address should be used for `vault_addr` in [vault-client-conf](../deployment#configmap-vault-client-conf). 
+ +```shell +cat > cert.ext < +EOF + +openssl x509 -req \ + -CA server-ca.crt -CAkey server-ca.key \ + -in server.csr \ + -out server.crt \ + -days 365 \ + -extfile cert.ext \ + -CAcreateserial + +cat server-ca.crt >> server.crt +``` + +Create client CA private key and certificate: + +```shell +openssl req -x509 -sha256 -days 365 -newkey rsa:2048 -nodes \ + -subj "/CN=Client Root CA" \ + -keyout client-ca.key \ + -out client-ca.crt +``` + +Create client private key and CSR: + +```shell +openssl req -newkey rsa:2048 -nodes \ + -subj "/CN=vault-client" \ + -keyout client.key \ + -out client.csr +``` + +Create client certificate signed by the CA: + +```shell +cat > cert.ext <> client.crt +``` + +#### Create server hcl file + +```shell +cat >server.hcl < Variable `CONF_DIR` below refers to the directory containing files *server.crt*, *server.key*, *client-ca.crt* and *server.hcl*. +```shell +VOL_DIR="$CONF_DIR" +VOL_DIR_D="/var/vault" +ROOT_TOKEN="DemoRootToken" +VAULT_IMG="vault:1.9.3" + +docker run --rm -d \ + --name="vault-server" \ + -p 8200:8200 -p 8400:8400 \ + -v $VOL_DIR:$VOL_DIR_D -w $VOL_DIR_D \ + -e VAULT_DEV_ROOT_TOKEN_ID=$ROOT_TOKEN \ + -e VAULT_ADDR="http://127.0.0.1:8200" \ + -e VAULT_TOKEN=$ROOT_TOKEN \ + $VAULT_IMG \ + sh -c 'vault server -dev -dev-listen-address 0.0.0.0:8200 -config=server.hcl' +``` + +## Minimum Server Configuration + +> **NOTE:** this configuration is a bare minimum to support Encryption and is not intended for use in production environment. +Refer to the [Hashicorp Vault documentation](https://www.vaultproject.io/docs) for recommended configuration options. + +> If a [test instance of Vault](#vault-server-installation) is used, the `vault` commands below can be executed in the Vault server container shell. +> To enter the shell, run `docker exec -it vault-server sh`. After completing the configuration process, exit the shell by typing `exit`. +> +> Alternatively, you can [download the vault binary](https://www.vaultproject.io/downloads) and run it anywhere. +> It will require two environment variables to communicate with the Vault server: +> - `VAULT_ADDR` - URL similar to `http://127.0.0.1:8200`. You may need to change the address in the URL to the address of +the Docker host where the server is running. +> - `VAULT_TOKEN` - Authentication token, e.g. the root token `DemoRootToken` used in the [test instance of Vault](#vault-server-installation). + +#### Enable Key/Value secret engine + +```shell +vault secrets enable -version=2 -path=dea-keys/ kv +vault write /dea-keys/config cas_required=true max_versions=1 +``` + +Key/Value secret engine is used to store encryption keys. Each encryption key is represented by a key-value entry. + +#### Enable AppRole authentication + +```shell +vault auth enable approle +``` + +#### Create a role + +```shell +vault write auth/approle/role/dea-role \ + secret_id_ttl=28d \ + token_num_uses=0 \ + token_ttl=1h \ + token_max_ttl=1h \ + token_explicit_max_ttl=10d \ + secret_id_num_uses=0 +``` + +TTL values here are chosen arbitrarily and can be changed to desired values. + +#### Create and assign a token policy to the role + +```shell +vault policy write dea-policy - < Secret ID has an expiration time after which it becomes invalid resulting in [authorization failure](../troubleshooting#expired-approle-secret-id). 
+> The expiration time for new secret IDs can be set in the `secret_id_ttl` parameter when [the role is created](#create-a-role), or later on using
+> `vault write auth/approle/role/dea-role/secret-id-ttl secret_id_ttl=24h`.
+
+## Token TTL Considerations
+
+The effective client token TTL is determined by the Vault server based on multiple factors, which are described in the [Vault documentation](https://www.vaultproject.io/docs/concepts/tokens#token-time-to-live-periodic-tokens-and-explicit-max-ttls).
+
+With the default server settings, role-level values control TTL in this way:
+
+`token_explicit_max_ttl=2h` - limits the client token TTL to 2 hours since it was originally issued as a result of login. This is a hard limit.
+
+`token_ttl=30m` - sets the default client token TTL to 30 minutes. The 30 minutes are counted from the login time and from any following token renewal.
+The client token will only be able to renew 3 times before reaching its total allowed TTL of 2 hours.
+
+Existing role values can be changed using `vault write auth/approle/role/dea-role token_ttl=30m token_explicit_max_ttl=2h`.
+
+> Selecting TTL values that are too short will result in excessive overhead for Encryption to remain authenticated to the Vault server.
diff --git a/content/docs/snapshots/volume-group-snapshots/_index.md b/content/docs/snapshots/volume-group-snapshots/_index.md
index c266498bef..3fcf1f5426 100644
--- a/content/docs/snapshots/volume-group-snapshots/_index.md
+++ b/content/docs/snapshots/volume-group-snapshots/_index.md
@@ -6,6 +6,8 @@ Description: >
  Volume Group Snapshot module of Dell CSI drivers
---
## Volume Group Snapshot Feature
+The Dell CSM Volume Group Snapshotter is an operator which extends the Kubernetes API to support crash-consistent snapshots of groups of volumes.
+Volume Group Snapshot supports the PowerFlex and PowerStore drivers.
In order to use Volume Group Snapshots, ensure the volume snapshot module is enabled.
- Kubernetes Volume Snapshot CRDs @@ -28,6 +30,7 @@ spec: # "Delete" - delete VolumeSnapshot instances memberReclaimPolicy: "Retain" volumesnapshotclass: "" + timeout: 90sec pvcLabel: "vgs-snap-label" # pvcList: # - "pvcName1" diff --git a/content/docs/support/_index.md b/content/docs/support/_index.md index 458bd392a5..54535f32f8 100644 --- a/content/docs/support/_index.md +++ b/content/docs/support/_index.md @@ -1,7 +1,7 @@ --- title: "Support" linkTitle: "Support" -weight: 11 +weight: 13 Description: > Dell Container Storage Modules (CSM) support --- diff --git a/content/docs/troubleshooting/_index.md b/content/docs/troubleshooting/_index.md index c07a2998c8..f1679aa6b7 100644 --- a/content/docs/troubleshooting/_index.md +++ b/content/docs/troubleshooting/_index.md @@ -1,7 +1,7 @@ --- title: "Troubleshooting" linkTitle: "Troubleshooting" -weight: 10 +weight: 11 Description: > Dell Container Storage Modules (CSM) troubleshooting information --- @@ -16,4 +16,8 @@ Troubleshooting links for Container Storage Modules: [CSM for Replication](../replication/troubleshooting) -[CSM for Resiliency](../resiliency/troubleshooting) \ No newline at end of file +[CSM for Resiliency](../resiliency/troubleshooting) + +[CSM for Encryption](../secure/encryption/troubleshooting) + +[CSM for Application Mobility](../applicationmobility/troubleshooting) \ No newline at end of file diff --git a/content/v1/_index.md b/content/v1/_index.md index 181e677e61..baa4f84ee0 100644 --- a/content/v1/_index.md +++ b/content/v1/_index.md @@ -1,4 +1,3 @@ - --- title: "Documentation" linkTitle: "Documentation" @@ -7,6 +6,7 @@ linkTitle: "Documentation" This document version is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the [latest version](/csm-docs/) {{% /pageinfo %}} + The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization and, resiliency. 
CSM Hex Diagram @@ -17,23 +17,23 @@ CSM is made up of multiple components including modules (enterprise capabilities ## CSM Supported Modules and Dell CSI Drivers -| Modules/Drivers | CSM 1.2.1 | [CSM 1.2](../v1/) | [CSM 1.1](../v1/) | [CSM 1.0.1](../v2/) | +| Modules/Drivers | CSM 1.3 | [CSM 1.2.1](../v1/) | [CSM 1.2](../v2/) | [CSM 1.1](../v3/) | | - | :-: | :-: | :-: | :-: | -| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.2 | 1.1 | 1.0 | -| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1.1 | 1.1 | 1.0.1 | 1.0.1 | -| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.2 | 1.1 | 1.0 | -| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.1 | 1.0.1 | 1.0.1 | -| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.2 | v2.1 | v2.0 | -| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.2 | v2.1 | v2.0 | -| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.2 | v2.1 | v2.0 | -| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.2 | v2.1 | v2.0 | -| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.2 | v2.1 | v2.0 | +| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 | +| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.2.0 | v1.1.1 | v1.1.0 | v1.0.1 | +| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 | +| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.2.0 | v1.1.0 | v1.1.0 | v1.0.1 | +| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | +| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | +| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.3.0 | v2.2.0 | v2.2.0| v2.1.0 | +| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | +| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 | ## CSM Modules Support Matrix for Dell CSI Drivers -| CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 | +| CSM Module | CSI PowerFlex v2.3.0 | CSI PowerScale v2.3.0 | CSI PowerStore v2.3.0 | CSI PowerMax v2.3.0 | CSI Unity XT v2.3.0 | | ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- | -| Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ | -| Observability v1.1.1 | ✔️ | ❌ | ✔️ | ❌ | ❌ | -| Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ | -| Resilency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️ | \ No newline at end of file +| Authorization v1.3| ✔️ | ✔️ | ❌ | ✔️ | ❌ | +| Observability v1.2| ✔️ | ❌ | ✔️ | ❌ | ❌ | +| Replication v1.3| ❌ | ✔️ | ✔️ | ✔️ | ❌ | +| Resiliency v1.2| ✔️ | ✔️ | ❌ | ❌ | ✔️ | diff --git a/content/v1/authorization/_index.md b/content/v1/authorization/_index.md index 0310e936d6..744d4918eb 100644 --- a/content/v1/authorization/_index.md +++ b/content/v1/authorization/_index.md @@ -20,7 +20,7 @@ The following diagram shows a high-level overview of CSM for Authorization with ## CSM for Authorization Capabilities {{}} -| Feature | PowerFlex | PowerMax | PowerScale | Unity | PowerStore | 
+| Feature | PowerFlex | PowerMax | PowerScale | Unity XT | PowerStore | | - | - | - | - | - | - | | Ability to set storage quota limits to ensure k8s tenants are not overconsuming storage | Yes | Yes | No (natively supported) | No | No | | Ability to create access control policies to ensure k8s tenant clusters are not accessing storage that does not belong to them | Yes | Yes | No (natively supported) | No | No | @@ -33,8 +33,7 @@ The following diagram shows a high-level overview of CSM for Authorization with {{
}} | COP/OS | Supported Versions | |-|-| -| Kubernetes | 1.21, 1.22, 1.23 | -| Red Hat OpenShift | 4.8, 4.9| +| Kubernetes | 1.22, 1.23, 1.24 | | RHEL | 7.x, 8.x | | CentOS | 7.8, 7.9 | {{
}} @@ -53,9 +52,9 @@ CSM for Authorization supports the following CSI drivers and versions. {{}} | Storage Array | CSI Driver | Supported Versions | | ------------- | ---------- | ------------------ | -| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1 ,v2.2 | -| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + | +| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0 + | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0 + | {{
}}

**NOTE:** If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod.

@@ -69,6 +68,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t
| ------------------------------- | ---------------------------------- |
| dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 |
| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 |
+| dellemc/csm-authorization-sidecar:v1.3.0 | v1.1.0, v1.2.0, v1.3.0 |
{{</table>}}

## Roles and Responsibilities
diff --git a/content/v1/authorization/cli.md b/content/v1/authorization/cli.md
index f1ef1bb5aa..b282d7c3fd 100644
--- a/content/v1/authorization/cli.md
+++ b/content/v1/authorization/cli.md
@@ -25,6 +25,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl role delete](#karavictl-role-delete ) | Delete role |
| [karavictl rolebinding](#karavictl-rolebinding) | Manage role bindings |
| [karavictl rolebinding create](#karavictl-rolebinding-create) | Create a rolebinding between role and tenant |
+| [karavictl rolebinding delete](#karavictl-rolebinding-delete) | Delete a rolebinding between role and tenant |
| [karavictl storage](#karavictl-storage) | Manage storage systems |
| [karavictl storage get](#karavictl-storage-get) | Get details on a registered storage system |
| [karavictl storage list](#karavictl-storage-list) | List registered storage systems |
@@ -35,7 +36,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl tenant create](#karavictl-tenant-create) | Create a tenant resource within CSM |
| [karavictl tenant get](#karavictl-tenant-get) | Get a tenant resource within CSM |
| [karavictl tenant list](#karavictl-tenant-list) | Lists tenant resources within CSM |
-| [karavictl tenant get](#karavictl-tenant-get) | Get a tenant resource within CSM |
+| [karavictl tenant revoke](#karavictl-tenant-revoke) | Revoke access to storage resources for a tenant |
| [karavictl tenant delete](#karavictl-tenant-delete) | Deletes a tenant resource within CSM |

@@ -538,7 +539,46 @@ karavictl rolebinding create [flags]
```
$ karavictl rolebinding create --role CSISilver --tenant Alice
```
-On success, there will be no output. You may run `karavictl tenant get ` to confirm the rolebinding creation occurred.
+On success, there will be no output. You may run `karavictl tenant get --name <tenant-name>` to confirm the rolebinding creation occurred.
+
+
+---
+
+
+
+### karavictl rolebinding delete
+
+Delete a rolebinding between role and tenant
+
+##### Synopsis
+
+Deletes a rolebinding between role and tenant
+
+```
+karavictl rolebinding delete [flags]
+```
+
+##### Options
+
+```
+  -h, --help            help for delete
+  -r, --role string     Role name
+  -t, --tenant string   Tenant name
+```
+
+##### Options inherited from parent commands
+
+```
+      --addr string     Address of the server (default "localhost:443")
+      --config string   config file (default is $HOME/.karavictl.yaml)
+```
+
+##### Output
+
+```
+$ karavictl rolebinding delete --role CSISilver --tenant Alice
+```
+On success, there will be no output. You may run `karavictl tenant get --name <tenant-name>` to confirm the rolebinding deletion occurred.
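+
+As a usage sketch, the sequence below creates a rolebinding, confirms it, and then deletes it; the role `CSISilver` and tenant `Alice` are the same illustrative names used in the examples above.
+```
+# Bind the role to the tenant, then confirm the binding appears on the tenant.
+karavictl rolebinding create --role CSISilver --tenant Alice
+karavictl tenant get --name Alice
+
+# Remove the binding and confirm it is gone.
+karavictl rolebinding delete --role CSISilver --tenant Alice
+karavictl tenant get --name Alice
+```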
@@ -802,7 +842,7 @@ Manage tenants ##### Synopsis -Management fortenants +Management for tenants ``` karavictl tenant [flags] @@ -875,7 +915,7 @@ Get a tenant resource within CSM ##### Synopsis -Gets a tenant resource within CSM +Gets a tenant resource and its assigned roles within CSM ``` karavictl tenant get [flags] @@ -902,6 +942,7 @@ $ karavictl tenant get --name Alice { "name": "Alice" + "roles": "role-1,role-2" } ``` @@ -958,6 +999,44 @@ $ karavictl tenant list +### karavictl tenant revoke + +Revokes access for a tenant + +##### Synopsis + +Revokes access to storage resources for a tenant + +``` +karavictl tenant revoke [flags] +``` + +##### Options + +``` + -h, --help help for create + -n, --name string Tenant name +``` + +##### Options inherited from parent commands + +``` + --addr string Address of the server (default "localhost:443") + --config string config file (default is $HOME/.karavictl.yaml) +``` + +##### Output +``` +$ karavictl tenant revoke --name Alice +``` +On success, there will be no output. + + + +--- + + + ### karavictl tenant delete Deletes a tenant resource within CSM @@ -988,4 +1067,4 @@ karavictl tenant delete [flags] ``` $ karavictl tenant delete --name Alice ``` -On success, there will be no output. You may run `karavictl tenant get --name ` to confirm the deletion occurred. \ No newline at end of file +On success, there will be no output. You may run `karavictl tenant get --name ` to confirm the deletion occurred. diff --git a/content/v1/authorization/deployment.md b/content/v1/authorization/deployment.md deleted file mode 100644 index b2c11a53a0..0000000000 --- a/content/v1/authorization/deployment.md +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: Deployment -linktitle: Deployment -weight: 2 -description: > - Dell EMC Container Storage Modules (CSM) for Authorization deployment ---- - -This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts: -- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators -- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell EMC CSI drivers with CSM for Authorization - -## Prerequisites - -The CSM for Authorization proxy server requires a Linux host with the following minimum resource allocations: -- 32 GB of memory -- 4 CPU -- 200 GB local storage - -## Deploying the CSM Authorization Proxy Server - -The first part deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator. - -The CSM for Authorization proxy server is installed using a single binary installer. - -### Single Binary Installer - -The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section. - -The single binary installer can also be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer: - -``` -make dist build-installer rpm -``` - -The `build-installer` step creates a binary at `bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `deploy/rpm/x86_64/`. -This allows CSM for Authorization to be installed in network-restricted environments. 
- -A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`. - -### Installing the RPM - -1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` having the following contents: - - ```json - { - "web": { - "sidecarproxyaddr": "docker_registry/sidecar-proxy:latest", - "jwtsigningsecret": "secret" - }, - "proxy": { - "host": ":8080" - }, - "zipkin": { - "collectoruri": "http://DNS_host_name:9411/api/v2/spans", - "probability": 1 - }, - "certificate": { - "keyFile": "path_to_private_key_file", - "crtFile": "path_to_host_cert_file", - "rootCertificate": "path_to_root_CA_file" - }, - "hostName": "DNS_host_name" - } - ``` - - In the above template, `DNS_host_name` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running the below command on the system: - - ``` - nslookup - ``` - -2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS_host_name` is also required. All traffic from `grpc.DNS_host_name` needs to be routed to `DNS_host_name` address, this can be configured by adding a new DNS entry for `grpc.DNS_host_name` or providing a temporary path in the `/etc/hosts` file. - - **NOTE:** The certificate provided in `crtFile` should be valid for both the `DNS_host_name` and the `grpc.DNS_host_name` address. - - For example, create the certificate config file with alternate names (to include example.com and grpc.example.com) and then create the .crt file: - - ``` - CN = example.com - subjectAltName = @alt_names - [alt_names] - DNS.1 = grpc.example.com - - openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out example.com.crt -days 365 -sha256 - ``` - -3. To install the rpm package on the system, run the below command: - - ```shell - rpm -ivh - ``` - -4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`. - -## Configuring the CSM for Authorization Proxy Server - -The storage administrator must first configure the proxy server with the following: -- Storage systems -- Tenants -- Roles -- Bind roles to tenants - -Run the following commands on the Authorization proxy server: - - ```console - # Specify any desired name - export RoleName="" - export RoleQuota="" - export TenantName="" - - # Specify info about Array1 - export Array1Type="" - export Array1SystemID="" - export Array1User="" - export Array1Password="" - export Array1Pool="" - export Array1Endpoint="" - - # Specify info about Array2 - export Array2Type="" - export Array2SystemID="" - export Array2User="" - export Array2Password="" - export Array2Pool="" - export Array2Endpoint="" - - # Specify IPs - export DriverHostVMIP="" - export DriverHostVMPassword="" - export DriverHostVMUser="" - - # Specify Authorization host address. 
NOTE: this is not the same as IP - export AuthorizationHost="" - - echo === Creating Storage(s) === - # Add array1 to authorization - karavictl storage create \ - --type ${Array1Type} \ - --endpoint ${Array1Endpoint} \ - --system-id ${Array1SystemID} \ - --user ${Array1User} \ - --password ${Array1Password} \ - --insecure - - # Add array2 to authorization - karavictl storage create \ - --type ${Array2Type} \ - --endpoint ${Array2Endpoint} \ - --system-id ${Array2SystemID} \ - --user ${Array2User} \ - --password ${Array2Password} \ - --insecure - - echo === Creating Tenant === - karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationHost}" - - echo === Creating Role === - karavictl role create \ - --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \ - --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota} - - echo === === Binding Role === - karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationHost}" - ``` - -### Generate a Token - -After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin. - - ``` - echo === Generating token === - karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationHost}" | jq -r '.Token' > token.yaml - - echo === Copy token to Driver Host === - sshpass -p $DriverHostPassword scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml - ``` - -**Note:** The sample above copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin. - -### Copy the karavictl Binary to the Kubernetes Master Node - -The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node where Kubernetes tenant admins so they configure the Dell EMC CSI driver with CSM for Authorization. - -``` -sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl -``` - -**Note:** The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin. - -## Configuring a Dell EMC CSI Driver with CSM for Authorization - -The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers). This is controlled by the Kubernetes tenant admin. - -There are currently 2 ways of doing this: -- Using the [CSM Installer](../../deployment) (*Recommended installation method*) -- Manually by following the steps [below](#configuring-a-dell-emc-csi-driver) - -### Configuring a Dell EMC CSI Driver - -Given a setup where Kubernetes, a storage system, CSI driver(s), and CSM for Authorization are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar: - -Run the following commands on the CSI Driver host - - ```console - # Specify Authorization host address. 
NOTE: this is not the same as IP - export AuthorizationHost="" - - echo === Applying token token === - # It is assumed that array type powermax has the namespace "powermax" and powerflex has the namepace "vxflexos" - kubectl apply -f /tmp/token.yaml -n powermax - kubectl apply -f /tmp/token.yaml -n vxflexos - - echo === injecting sidecar in all CSI driver hosts that token has been applied to === - sudo curl -k https://${AuthorizationHost}/install | sh - - # NOTE: you can also query parameters("namespace" and "proxy-port") with the curl url if you desire a specific behavior. - # 1) For instance, if you want to inject into just powermax, you can run - # sudo curl -k https://${AuthorizationHost}/install?namespace=powermax | sh - # 2) If you want to specify the proxy-port for powermax to be 900001, you can run - # sudo curl -k https://${AuthorizationHost}/install?proxy-port=powermax:900001 | sh - # 3) You can mix behaviors - # sudo curl -k https://${AuthorizationHost}/install?namespace=powermax&proxy-port=powermax:900001&namespace=vxflexos | sh - ``` - -## Updating CSM for Authorization Proxy Server Configuration - -CSM for Authorization has a subset of configuration parameters that can be updated dynamically: - -| Parameter | Type | Default | Description | -| --------- | ---- | ------- | ----------- | -| certificate.crtFile | String | "" |Path to the host certificate file | -| certificate.keyFile | String | "" |Path to the host private key file | -| certificate.rootCertificate | String | "" |Path to the root CA file | -| web.sidecarproxyaddr | String |"127.0.0.1:5000/sidecar-proxy:latest" |Docker registry address of the CSM for Authorization sidecar-proxy | -| web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens | - -Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for the Authorization Server. The secret can be queried using k3s and kubectl like so: - -`k3s kubectl -n karavi get secret/karavi-config-secret` - -To update or add parameters, you must edit the base64 encoded data in the secret. The` karavi-config-secret` data can be decoded like so: - -`k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d` - -Save the output to a file or copy it to an editor to make changes. Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so: - -`cat | base64` - -Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret: - -`k3s kubectl -n karavi edit secret/karavi-config-secret` - -Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret. - -__Note:__ If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so: - -`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationHost}" | jq -r '.Token' > kubectl -n $namespace apply -f -` - -## CSM for Authorization Proxy Server Dynamic Configuration Settings - -Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM for Authorization logging settings during runtime, run the below command on the K3s cluster, make your changes, and save the updated configmap data. 
-
-```
-k3s kubectl -n karavi edit configmap/csm-config-params
-```
-
-This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
-
-```
-kubectl -n [CSM_CSI_DRIVER_NAMESPACE] edit configmap/<driver-name>-config-params
-```
-
-Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
\ No newline at end of file
diff --git a/content/v1/authorization/deployment/_index.md b/content/v1/authorization/deployment/_index.md
index ca15cb03da..5ff8a907d1 100644
--- a/content/v1/authorization/deployment/_index.md
+++ b/content/v1/authorization/deployment/_index.md
@@ -1,344 +1,11 @@
 ---
 title: Deployment
-linktitle: Deployment 
+linktitle: Deployment
 weight: 2
-description: >
-  Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization deployment
+description: Methods to install CSM Authorization
+tags:
+  - install
+  - csm-authorization
 ---
-This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
-- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
-- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization
-
-## Prerequisites
-
-The CSM for Authorization proxy server requires a Linux host with the following minimum resource allocations:
-- 32 GB of memory
-- 4 CPU
-- 200 GB local storage
-
-## Deploying the CSM Authorization Proxy Server
-
-The first part of deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator.
-
-The CSM for Authorization proxy server is installed using a single binary installer.
-
-### Single Binary Installer
-
-The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
-
-Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
-
-```
-make dist build-installer rpm
-```
-
-The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`.
-This allows CSM for Authorization to be installed in network-restricted environments.
-
-A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
-
-### Installing the RPM
-
-1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` with the following contents:
-
-   ```json
-   {
-     "web": {
-       "jwtsigningsecret": "secret"
-     },
-     "proxy": {
-       "host": ":8080"
-     },
-     "zipkin": {
-       "collectoruri": "http://DNS-hostname:9411/api/v2/spans",
-       "probability": 1
-     },
-     "certificate": {
-       "keyFile": "path_to_private_key_file",
-       "crtFile": "path_to_host_cert_file",
-       "rootCertificate": "path_to_root_CA_file"
-     },
-     "hostname": "DNS-hostname"
-   }
-   ```
-
-   In an instance where a secure deployment is not required, an insecure deployment is possible. Please note that self-signed certificates will be created for you using cert-manager to allow TLS encryption for communication on the CSM for Authorization proxy server. However, this is not recommended for production environments. For an insecure deployment, the json file in the location `$HOME/.karavi/config.json` only requires the following contents:
-
-   ```json
-   {
-     "hostname": "DNS-hostname"
-   }
-   ```
-
->__Note__:
-> - `DNS-hostname` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running `nslookup <IP-address>`
-> - There are a number of ways to create certificates. In a production environment, certificates are usually created and managed by an IT administrator. Otherwise, certificates can be created using OpenSSL.
-
-2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS-hostname` is also required. All traffic from `grpc.DNS-hostname` needs to be routed to the `DNS-hostname` address; this can be configured by adding a new DNS entry for `grpc.DNS-hostname` or providing a temporary path in the system's `/etc/hosts` file.
-
->__Note__: The certificate provided in `crtFile` should be valid for both the `DNS-hostname` and the `grpc.DNS-hostname` address.
-
-   For example, create the certificate config file with alternate names (to include DNS-hostname and grpc.DNS-hostname) and then create the .crt file:
-
-   ```
-   CN = DNS-hostname
-   subjectAltName = @alt_names
-   [alt_names]
-   DNS.1 = grpc.DNS-hostname.com
-
-   $ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.com.crt -days 365 -sha256
-   ```
-
-3. To install the rpm package on the system, run the below command:
-
-   ```shell
-   rpm -ivh <rpm-file>
-   ```
-
-4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
-
-## Configuring the CSM for Authorization Proxy Server
-
-The storage administrator must first configure the proxy server with the following:
-- Storage systems
-- Tenants
-- Roles
-- Role bindings
-
-Run the following commands on the Authorization proxy server:
->__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-
-   ```console
-   # Specify any desired name
-   export RoleName=""
-   export RoleQuota=""
-   export TenantName=""
-
-   # Specify info about Array1
-   export Array1Type=""
-   export Array1SystemID=""
-   export Array1User=""
-   export Array1Password=""
-   export Array1Pool=""
-   export Array1Endpoint=""
-
-   # Specify info about Array2
-   export Array2Type=""
-   export Array2SystemID=""
-   export Array2User=""
-   export Array2Password=""
-   export Array2Pool=""
-   export Array2Endpoint=""
-
-   # Specify IPs
-   export DriverHostVMIP=""
-   export DriverHostVMPassword=""
-   export DriverHostVMUser=""
-
-   # Specify Authorization proxy host address. NOTE: this is not the same as IP
-   export AuthorizationProxyHost=""
-
-   echo === Creating Storage(s) ===
-   # Add array1 to authorization
-   karavictl storage create \
-     --type ${Array1Type} \
-     --endpoint ${Array1Endpoint} \
-     --system-id ${Array1SystemID} \
-     --user ${Array1User} \
-     --password ${Array1Password} \
-     --insecure
-
-   # Add array2 to authorization
-   karavictl storage create \
-     --type ${Array2Type} \
-     --endpoint ${Array2Endpoint} \
-     --system-id ${Array2SystemID} \
-     --user ${Array2User} \
-     --password ${Array2Password} \
-     --insecure
-
-   echo === Creating Tenant ===
-   karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}"
-
-   echo === Creating Role ===
-   karavictl role create \
-     --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \
-     --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota}
-
-   echo === Binding Role ===
-   karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationProxyHost}"
-   ```
-
-### Generate a Token
-
-After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
-
->__Note__:
-> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
-
-   ```
-   echo === Generating token ===
-   karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml
-
-   echo === Copy token to Driver Host ===
-   sshpass -p ${DriverHostVMPassword} scp token.yaml ${DriverHostVMUser}@${DriverHostVMIP}:/tmp/token.yaml
-   ```
-
-### Copy the karavictl Binary to the Kubernetes Master Node
-
-The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node so the Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization.
-
-```
-sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl
-```
-
->__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
-
-## Configuring a Dell CSI Driver with CSM for Authorization
-
-The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-
-### Configuring a Dell CSI Driver
-
-Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
-
-1. Create the token secret in the namespace of the driver.
-
-   ```console
-   # It is assumed that array type powermax has the namespace "powermax", powerflex has the namespace "vxflexos", and powerscale has the namespace "isilon".
-   kubectl apply -f /tmp/token.yaml -n powermax
-   kubectl apply -f /tmp/token.yaml -n vxflexos
-   kubectl apply -f /tmp/token.yaml -n isilon
-   ```
-
-2.
Edit the following parameters in samples/secret/karavi-authorization-config.json file in [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. In an instance where multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver. - - | Parameter | Description | Required | Default | - | --------- | ----------- | -------- |-------- | - | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - | - | password | Password for connecting to to the backend storage array. This parameter is ignored. | No | - | - | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - | - | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 | - | systemID | System ID of the backend storage array. | Yes | " " | - | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true | - | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml | - - -Create the karavi-authorization-config secret using the following command: - -`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -` - ->__Note__: -> - Create the driver secret as you would normally except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password -> - For PowerScale, the *systemID* will be the *clusterName* of the array. -> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1. -3. Create the proxy-server-root-certificate secret. - - If running in *insecure* mode, create the secret with empty data: - - `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -` - - Otherwise, create the proxy-server-root-certificate secret with the appropriate file: - - `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -` - - ->__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers. -#### PowerFlex - -Please refer to step 5 in the [installation steps for PowerFlex](../../csidriver/installation/helm/powerflex) to edit the parameters in samples/config.yaml file to communicate with the sidecar. - -1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json - -2. 
Create vxflexos-config secret using the following command: - - `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -` - -Please refer to step 9 in the [installation steps for PowerFlex](../../csidriver/installation/helm/powerflex) to edit the parameters in *myvalues.yaml* file to communicate with the sidecar. - -3. Enable CSM for Authorization and provide *proxyHost* address - -4. Install the CSI PowerFlex driver -#### PowerMax - -Please refer to step 7 in the [installation steps for PowerMax](../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar. - -1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json - -2. Enable CSM for Authorization and provide *proxyHost* address - -3. Install the CSI PowerMax driver - -#### PowerScale - -Please refer to step 5 in the [installation steps for PowerScale](../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar. - -1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json - -*Notes:* -> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml. -> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1. - -2. Enable CSM for Authorization and provide *proxyHost* address - -Please refer to step 6 in the [installation steps for PowerScale](../../csidriver/installation/helm/isilon) to edit the parameters in samples/secret/secret.yaml file to communicate with the sidecar. - -3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json - ->__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*. - -4. Create the isilon-creds secret using the following command: - - `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -` - -5. Install the CSI PowerScale driver -## Updating CSM for Authorization Proxy Server Configuration - -CSM for Authorization has a subset of configuration parameters that can be updated dynamically: - -| Parameter | Type | Default | Description | -| --------- | ---- | ------- | ----------- | -| certificate.crtFile | String | "" |Path to the host certificate file | -| certificate.keyFile | String | "" |Path to the host private key file | -| certificate.rootCertificate | String | "" |Path to the root CA file | -| web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens | - -Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for the Authorization Server. The secret can be queried using k3s and kubectl like so: - -`k3s kubectl -n karavi get secret/karavi-config-secret` - -To update or add parameters, you must edit the base64 encoded data in the secret. The` karavi-config-secret` data can be decoded like so: - -`k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d` - -Save the output to a file or copy it to an editor to make changes. 
Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so:
-
-`cat <file> | base64`
-
-Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret:
-
-`k3s kubectl -n karavi edit secret/karavi-config-secret`
-
-Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
-
->__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`
-
-`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' | kubectl -n $namespace apply -f -`
-
-## CSM for Authorization Proxy Server Dynamic Configuration Settings
-
-Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM for Authorization logging settings during runtime, run the below command on the K3s cluster, make your changes, and save the updated configmap data.
-
-```
-k3s kubectl -n karavi edit configmap/csm-config-params
-```
-
-This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
-
-```
-kubectl -n [CSM_CSI_DRIVER_NAMESPACE] edit configmap/<driver-name>-config-params
-```
-
-Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
+Installation information for CSM Authorization can be found in this section.
diff --git a/content/v1/authorization/deployment/helm/_index.md b/content/v1/authorization/deployment/helm/_index.md
new file mode 100644
index 0000000000..76d0f47c1a
--- /dev/null
+++ b/content/v1/authorization/deployment/helm/_index.md
@@ -0,0 +1,374 @@
+---
+title: Helm
+linktitle: Helm
+description: >
+  Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Helm deployment
+---
+
+CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
+
+The following CSM Authorization components are installed in the specified namespace:
+- proxy-service, which forwards requests from the CSI Driver to the backend storage array
+- tenant-service, which configures tenants, role bindings, and generates JSON Web Tokens
+- role-service, which configures roles for tenants to be bound to
+- storage-service, which configures backend storage arrays for the proxy-server to forward requests to
+
+The following third-party components are installed in the specified namespace:
+- redis, which stores data regarding tenants and their volume ownership, quota, and revocation status
+- redis-commander, a web management tool for Redis
+
+The following third-party components are optionally installed in the specified namespace:
+- cert-manager, which optionally provides a self-signed certificate to configure the CSM Authorization Ingresses
+- nginx-ingress-controller, which fulfills the CSM Authorization Ingresses
+
+## Install CSM Authorization
+
+**Steps**
+1. Run `git clone https://github.com/dell/helm-charts.git` to clone the git repository.
+
+2. Ensure that you have created a namespace where you want to install CSM Authorization. You can run `kubectl create namespace authorization` to create a new one.
+
+3. Prepare `samples/csm-authorization/config.yaml` which contains the JWT signing secret. The following table lists the configuration parameters.
+
+   | Parameter | Description | Required | Default |
+   | --------- | ----------- | -------- | ------- |
+   | web.jwtsigningsecret | String used to sign JSON Web Tokens | true | secret |
+
+   Example:
+
+   ```yaml
+   web:
+     jwtsigningsecret: randomString123
+   ```
+
+   After editing the file, run the following command to create a secret called `karavi-config-secret`:
+
+   `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/csm-authorization/config.yaml`
+
+   Use the following command to replace or update the secret:
+
+   `kubectl create secret generic karavi-config-secret -n authorization --from-file=config=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
+
+4. Copy the default values.yaml file `cp charts/csm-authorization/values.yaml myvalues.yaml`
+
+5. Look over all the fields in `myvalues.yaml` and fill in/adjust any as needed.
+
+| Parameter | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| **ingress-nginx** | This section configures the enablement of the NGINX Ingress Controller. | - | - |
+| enabled | Enable/Disable deployment of the NGINX Ingress Controller. Set to false if you already have an Ingress Controller installed. | No | true |
+| **cert-manager** | This section configures the enablement of cert-manager. | - | - |
+| enabled | Enable/Disable deployment of cert-manager. Set to false if you already have cert-manager installed. | No | true |
+| **authorization** | This section configures the CSM-Authorization components. | - | - |
+| authorization.images.proxyService | The image to use for the proxy-service. | Yes | dellemc/csm-authorization-proxy:nightly |
+| authorization.images.tenantService | The image to use for the tenant-service. | Yes | dellemc/csm-authorization-tenant:nightly |
+| authorization.images.roleService | The image to use for the role-service. | Yes | dellemc/csm-authorization-proxy:nightly |
+| authorization.images.storageService | The image to use for the storage-service. | Yes | dellemc/csm-authorization-storage:nightly |
+| authorization.images.opa | The image to use for Open Policy Agent. | Yes | openpolicyagent/opa |
+| authorization.images.opaKubeMgmt | The image to use for Open Policy Agent kube-mgmt. | Yes | openpolicyagent/kube-mgmt:0.11 |
+| authorization.hostname | The hostname to configure the self-signed certificate (if applicable) and the proxy, tenant, role, and storage service Ingresses. | Yes | csm-authorization.com |
+| authorization.logLevel | CSM Authorization log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | debug |
+| authorization.zipkin.collectoruri | The URI of the Zipkin instance to export traces. | No | - |
+| authorization.zipkin.probability | The ratio of traces to export. | No | - |
+| authorization.proxyServerIngress.ingressClassName | The ingressClassName of the proxy-service Ingress. | Yes | - |
+| authorization.proxyServerIngress.hosts | Additional host rules to be applied to the proxy-service Ingress. | No | - |
+| authorization.proxyServerIngress.annotations | Additional annotations for the proxy-service Ingress. | No | - |
+| authorization.tenantServiceIngress.ingressClassName | The ingressClassName of the tenant-service Ingress. | Yes | - |
+| authorization.tenantServiceIngress.hosts | Additional host rules to be applied to the tenant-service Ingress. | No | - |
+| authorization.tenantServiceIngress.annotations | Additional annotations for the tenant-service Ingress. | No | - |
+| authorization.roleServiceIngress.ingressClassName | The ingressClassName of the role-service Ingress. | Yes | - |
+| authorization.roleServiceIngress.hosts | Additional host rules to be applied to the role-service Ingress. | No | - |
+| authorization.roleServiceIngress.annotations | Additional annotations for the role-service Ingress. | No | - |
+| authorization.storageServiceIngress.ingressClassName | The ingressClassName of the storage-service Ingress. | Yes | - |
+| authorization.storageServiceIngress.hosts | Additional host rules to be applied to the storage-service Ingress. | No | - |
+| authorization.storageServiceIngress.annotations | Additional annotations for the storage-service Ingress. | No | - |
+| **redis** | This section configures Redis. | - | - |
+| redis.images.redis | The image to use for Redis. | Yes | redis:6.0.8-alpine |
+| redis.images.commander | The image to use for Redis Commander. | Yes | rediscommander/redis-commander:latest |
+| redis.storageClass | The storage class for Redis to use for persistence. If not supplied, the default storage class is used. | No | - |
+
+ *NOTE*:
+- The tenant, role, and storage services use gRPC. If the Ingress Controller requires annotations to support gRPC, they must be supplied.
+
+6. Install CSM Authorization using `helm`:
+
+To install CSM Authorization with the service Ingresses using your own certificate, run:
+
+```
+helm -n authorization install authorization -f myvalues.yaml charts/csm-authorization \
+--set-file authorization.certificate=<location-of-certificate-file> \
+--set-file authorization.privateKey=<location-of-private-key-file>
+```
+
+To install CSM Authorization with the service Ingresses using a self-signed certificate generated via cert-manager, run:
+
+```
+helm -n authorization install authorization -f myvalues.yaml charts/csm-authorization
+```
+
+## Install Karavictl
+
+The Karavictl CLI can be obtained directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
+
+In order to run `karavictl` commands, the binary needs to exist in your PATH, for example /usr/local/bin.
+
+Karavictl commands and intended use can be found [here](../../cli/).
+
+## Configuring the CSM Authorization Proxy Server
+
+The storage administrator must first configure the proxy server with the following:
+- Storage systems
+- Tenants
+- Roles
+- Role bindings
+
+This is done using `karavictl` to connect to the storage, tenant, and role services. In this example, we will be referencing an installation using `csm-authorization.com` as the authorization.hostname value and the NGINX Ingress Controller accessed via the cluster's master node.
+
+Run `kubectl -n authorization get ingress` and `kubectl -n authorization get service` to see the Ingress rules for these services and the exposed port for accessing these services via the LoadBalancer. For example:
+
+```
+# kubectl -n authorization get ingress
+NAME              CLASS   HOSTS                           ADDRESS   PORTS     AGE
+proxy-server      nginx   csm-authorization.com                     80, 443   86s
+role-service      nginx   role.csm-authorization.com                80, 443   86s
+storage-service   nginx   storage.csm-authorization.com             80, 443   86s
+tenant-service    nginx   tenant.csm-authorization.com              80, 443   86s
+
+# kubectl -n authorization get service
+NAME                                               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
+authorization-cert-manager                         ClusterIP      10.104.35.150    <none>        9402/TCP                     28s
+authorization-cert-manager-webhook                 ClusterIP      10.97.179.94     <none>        443/TCP                      27s
+authorization-ingress-nginx-controller             LoadBalancer   10.108.115.217   <pending>     80:30080/TCP,443:30016/TCP   27s
+authorization-ingress-nginx-controller-admission   ClusterIP      10.103.143.215   <none>        443/TCP                      27s
+proxy-server                                       ClusterIP      10.111.86.51     <none>        8080/TCP                     28s
+redis                                              ClusterIP      10.111.158.17    <none>        6379/TCP                     28s
+redis-commander                                    ClusterIP      10.107.22.41     <none>        8081/TCP                     27s
+role-service                                       ClusterIP      10.96.113.230    <none>        50051/TCP                    27s
+storage-service                                    ClusterIP      10.101.144.37    <none>        50051/TCP                    27s
+tenant-service                                     ClusterIP      10.109.60.141    <none>        50051/TCP                    28s
+```
+
+On the machine running `karavictl`, the `/etc/hosts` file needs to be updated with the Ingress hosts for the storage, tenant, and role services. For example:
+
+```
+<master-node-IP> tenant.csm-authorization.com
+<master-node-IP> role.csm-authorization.com
+<master-node-IP> storage.csm-authorization.com
+```
+
+The port that exposes these services is `30016`.
+
+### Configure Storage
+
+A `storage` entity in CSM Authorization consists of the storage type (PowerFlex, PowerMax, PowerScale), the system ID, the API endpoint, and the credentials. For example, to create PowerFlex storage:
+
+```
+karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --insecure --array-insecure --addr storage.csm-authorization.com:30016
+```
+
+ *NOTE*:
+- The `insecure` flag specifies to skip certificate validation when connecting to the CSM Authorization storage service. The `array-insecure` flag specifies to skip certificate validation when the proxy-service connects to the backend storage array. Run `karavictl storage create --help` for help.
+
+### Configuring Tenants
+
+A `tenant` is a Kubernetes cluster that a role will be bound to. For example, to create a tenant named `Finance`:
+
+```
+karavictl tenant create --name Finance --insecure --addr tenant.csm-authorization.com:30016
+```
+
+ *NOTE*:
+- The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl tenant create --help` for help.
+
+### Configuring Roles
+
+A `role` consists of a name, the storage to use, and the quota limit for the storage pool to be used. For example, to create a role named `FinanceRole` using the PowerFlex storage created above with a quota limit of 100GB in storage pool `myStoragePool`:
+
+```
+karavictl role create --insecure --addr role.csm-authorization.com:30016 --role=FinanceRole=powerflex=${systemID}=myStoragePool=100GB
+```
+
+ *NOTE*:
+- The `insecure` flag specifies to skip certificate validation when connecting to the role service. Run `karavictl role create --help` for help.
+
+### Configuring Role Bindings
+
+A `role binding` binds a role to a tenant. For example, to bind the `FinanceRole` to the `Finance` tenant:
+
+```
+karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr tenant.csm-authorization.com:30016
+```
+
+ *NOTE*:
+- The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl rolebinding create --help` for help.
+
+### Generating a Token
+
+Now that the tenant is bound to a role, a JSON Web Token can be generated for the tenant. For example, to generate a token for the `Finance` tenant:
+
+```
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016
+
+{
+  "Token": "\napiVersion: v1\nkind: Secret\nmetadata:\n name: proxy-authz-tokens\ntype: Opaque\ndata:\n access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUXhPRFlzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLmJIODN1TldmaHoxc1FVaDcweVlfMlF3N1NTVnEyRzRKeGlyVHFMWVlEMkU=\n refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWXhNallzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkxNbWVUSkZlX2dveXR0V0lUUDc5QWVaTy1kdmN5SHAwNUwyNXAtUm9ZZnM=\n"
+}
+```
+
+With [jq](https://stedolan.github.io/jq/), you can process the above response to filter the secret manifest. For example:
+
+```
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016 | jq -r '.Token'
+apiVersion: v1
+kind: Secret
+metadata:
+  name: proxy-authz-tokens
+type: Opaque
+data:
+  access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUTFOekVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLk4tNE42Q1pPbUptcVQtRDF5ZkNGdEZqSmRDRjcxNlh1SXlNVFVyckNOS1U=
+  refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWTFNVEVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkVxb3lXNld5ZEFLdU9mSmtkMkZaMk9TVThZMzlKUFc0YmhfNHc5R05ZNmM=
+```
+
+This secret must be applied in the driver namespace. Continue reading the next section for configuring the driver to use CSM Authorization.
+
+## Configuring a Dell CSI Driver with CSM for Authorization
+
+The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
+
+### Configuring a Dell CSI Driver
+
+Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
+
+1. Apply the secret containing the token data into the driver namespace. It's assumed that the Kubernetes administrator has the token secret manifest saved in `/tmp/token.yaml`.
+
+   ```console
+   # It is assumed that array type powermax has the namespace "powermax", powerflex has the namespace "vxflexos", and powerscale has the namespace "isilon".
+   kubectl apply -f /tmp/token.yaml -n powermax
+   kubectl apply -f /tmp/token.yaml -n vxflexos
+   kubectl apply -f /tmp/token.yaml -n isilon
+   ```
+
+2. Edit the following parameters in the samples/secret/karavi-authorization-config.json file in the [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. In an instance where multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
+
+   | Parameter | Description | Required | Default |
+   | --------- | ----------- | -------- | ------- |
+   | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
+   | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
+   | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
+   | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
+   | systemID | System ID of the backend storage array. | Yes | " " |
+   | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+   | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
+
+Create the karavi-authorization-config secret using the following command:
+
+`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__:
+> - Create the driver secret as you would normally, except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password.
+> - For PowerScale, the *systemID* will be the *clusterName* of the array.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
+3. Create the proxy-server-root-certificate secret.
+
+   If running in *insecure* mode, create the secret with empty data:
+
+   `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
+
+   Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
+
+   `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers.
+#### PowerFlex
+
+Please refer to step 5 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in the samples/config.yaml file to communicate with the sidecar.
+
+1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
+
+2. Create the vxflexos-config secret using the following command:
+
+   `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -`
+
+Please refer to step 9 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in the *myvalues.yaml* file to communicate with the sidecar.
+
+3. Enable CSM for Authorization and provide the *proxyHost* address
+
+4. Install the CSI PowerFlex driver
+#### PowerMax
+
+Please refer to step 7 in the [installation steps for PowerMax](../../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar.
+
+1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
+
+2. Enable CSM for Authorization and provide the *proxyHost* address
+
+3. Install the CSI PowerMax driver
+
+#### PowerScale
+
+Please refer to step 5 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar.
+
+1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json
+
+*Notes:*
+> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
+
+2. Enable CSM for Authorization and provide the *proxyHost* address
+
+Please refer to step 6 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in the samples/secret/secret.yaml file to communicate with the sidecar.
+
+3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
+
+>__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*.
+
+4. Create the isilon-creds secret using the following command:
+
+   `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
+
+5. Install the CSI PowerScale driver
+## Updating CSM for Authorization Proxy Server Configuration
+
+CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
+
+| Parameter | Type | Default | Description |
+| --------- | ---- | ------- | ----------- |
+| web.jwtsigningsecret | String | "secret" | The secret used to sign JWT tokens |
+
+Updating configuration parameters can be done by editing the `karavi-config-secret`. The secret can be queried using kubectl like so:
+
+`kubectl -n authorization get secret/karavi-config-secret`
+
+To update parameters, you must edit the base64-encoded data in the secret. The `karavi-config-secret` data can be decoded like so:
+
+`kubectl -n authorization get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d`
+
+Save the output to a file or copy it to an editor to make changes. Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so:
+
+`cat <file> | base64`
+
+Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret:
+
+`kubectl -n authorization edit secret/karavi-config-secret`
+
+Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM Authorization will read the changed secret.
+
+>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command.
+
+## CSM for Authorization Proxy Server Dynamic Configuration Settings
+
+Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM Authorization logging settings during runtime, run the below command, make your changes, and save the updated ConfigMap data.
+
+```
+kubectl -n authorization edit configmap/csm-config-params
+```
+
+This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
+
+```
+kubectl -n [CSM_CSI_DRIVER_NAMESPACE] edit configmap/<driver-name>-config-params
+```
+
+Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
\ No newline at end of file
diff --git a/content/v1/authorization/deployment/rpm/_index.md b/content/v1/authorization/deployment/rpm/_index.md
new file mode 100644
index 0000000000..3c037dad45
--- /dev/null
+++ b/content/v1/authorization/deployment/rpm/_index.md
@@ -0,0 +1,349 @@
+---
+title: RPM
+linktitle: RPM
+weight: 2
+description: >
+  Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization RPM deployment
+---
+
+This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
+- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
+- Configuring one to many [supported](../../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization
+
+## Prerequisites
+
+The CSM for Authorization proxy server requires a Linux host with the following minimum resource allocations:
+- 32 GB of memory
+- 4 CPU
+- 200 GB local storage
+
+These packages need to be installed on the Linux host:
+- container-selinux
+- https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm
+
+## Deploying the CSM Authorization Proxy Server
+
+The first part of deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator.
+
+The CSM for Authorization proxy server is installed using a single binary installer.
+
+If CSM for Authorization is being installed on a system where SELinux is enabled, you must ensure the proper SELinux policies have been installed.
+
+### Single Binary Installer
+
+The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
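+
+For example, a download of the installer RPM might look like the following sketch; the version and asset filename here are hypothetical placeholders, so substitute the actual names listed on the releases page:
+
+```shell
+# Hypothetical release asset; check the releases page for the real version and filename.
+curl -LO https://github.com/dell/karavi-authorization/releases/download/v1.3.0/karavi-authorization-1.3.0.x86_64.rpm
+```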
+
+Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
+
+```
+make dist build-installer rpm
+```
+
+The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`.
+This allows CSM for Authorization to be installed in network-restricted environments.
+
+A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
+
+### Installing the RPM
+
+1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` with the following contents:
+
+   ```json
+   {
+     "web": {
+       "jwtsigningsecret": "secret"
+     },
+     "proxy": {
+       "host": ":8080"
+     },
+     "zipkin": {
+       "collectoruri": "http://DNS-hostname:9411/api/v2/spans",
+       "probability": 1
+     },
+     "certificate": {
+       "keyFile": "path_to_private_key_file",
+       "crtFile": "path_to_host_cert_file",
+       "rootCertificate": "path_to_root_CA_file"
+     },
+     "hostname": "DNS-hostname"
+   }
+   ```
+
+   In an instance where a secure deployment is not required, an insecure deployment is possible. Please note that self-signed certificates will be created for you using cert-manager to allow TLS encryption for communication on the CSM for Authorization proxy server. However, this is not recommended for production environments. For an insecure deployment, the json file in the location `$HOME/.karavi/config.json` only requires the following contents:
+
+   ```json
+   {
+     "hostname": "DNS-hostname"
+   }
+   ```
+
+>__Note__:
+> - `DNS-hostname` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running `nslookup <IP-address>`
+> - There are a number of ways to create certificates. In a production environment, certificates are usually created and managed by an IT administrator. Otherwise, certificates can be created using OpenSSL.
+
+2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS-hostname` is also required. All traffic from `grpc.DNS-hostname` needs to be routed to the `DNS-hostname` address; this can be configured by adding a new DNS entry for `grpc.DNS-hostname` or providing a temporary path in the system's `/etc/hosts` file.
+
+>__Note__: The certificate provided in `crtFile` should be valid for both the `DNS-hostname` and the `grpc.DNS-hostname` address.
+
+   For example, create the certificate config file with alternate names (to include DNS-hostname and grpc.DNS-hostname) and then create the .crt file:
+
+   ```
+   CN = DNS-hostname
+   subjectAltName = @alt_names
+   [alt_names]
+   DNS.1 = grpc.DNS-hostname.com
+
+   $ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.com.crt -days 365 -sha256
+   ```
+
+3. To install the rpm package on the system, run the below command:
+
+   ```shell
+   rpm -ivh <rpm-file>
+   ```
+
+4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
+
+If errors occur during installation, review the [Troubleshooting](../../troubleshooting) section.
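+
+As a quick post-install check, assuming the default `karavi` namespace used by the k3s commands elsewhere in this guide, you can confirm that the proxy server services reach a Running state:
+
+```shell
+# The CSM for Authorization services run on the bundled k3s cluster.
+k3s kubectl get pods -n karavi
+```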
+
+## Configuring the CSM for Authorization Proxy Server
+
+The storage administrator must first configure the proxy server with the following:
+- Storage systems
+- Tenants
+- Roles
+- Role bindings
+
+Run the following commands on the Authorization proxy server:
+>__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
+
+   ```console
+   # Specify any desired name
+   export RoleName=""
+   export RoleQuota=""
+   export TenantName=""
+
+   # Specify info about Array1
+   export Array1Type=""
+   export Array1SystemID=""
+   export Array1User=""
+   export Array1Password=""
+   export Array1Pool=""
+   export Array1Endpoint=""
+
+   # Specify info about Array2
+   export Array2Type=""
+   export Array2SystemID=""
+   export Array2User=""
+   export Array2Password=""
+   export Array2Pool=""
+   export Array2Endpoint=""
+
+   # Specify IPs
+   export DriverHostVMIP=""
+   export DriverHostVMPassword=""
+   export DriverHostVMUser=""
+
+   # Specify Authorization proxy host address. NOTE: this is not the same as IP
+   export AuthorizationProxyHost=""
+
+   echo === Creating Storage(s) ===
+   # Add array1 to authorization
+   karavictl storage create \
+     --type ${Array1Type} \
+     --endpoint ${Array1Endpoint} \
+     --system-id ${Array1SystemID} \
+     --user ${Array1User} \
+     --password ${Array1Password} \
+     --array-insecure
+
+   # Add array2 to authorization
+   karavictl storage create \
+     --type ${Array2Type} \
+     --endpoint ${Array2Endpoint} \
+     --system-id ${Array2SystemID} \
+     --user ${Array2User} \
+     --password ${Array2Password} \
+     --array-insecure
+
+   echo === Creating Tenant ===
+   karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}"
+
+   echo === Creating Role ===
+   karavictl role create \
+     --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \
+     --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota}
+
+   echo === Binding Role ===
+   karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationProxyHost}"
+   ```
+
+### Generate a Token
+
+After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
+
+>__Note__:
+> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
+> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
+
+   ```
+   echo === Generating token ===
+   karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml
+
+   echo === Copy token to Driver Host ===
+   sshpass -p ${DriverHostVMPassword} scp token.yaml ${DriverHostVMUser}@${DriverHostVMIP}:/tmp/token.yaml
+   ```
+
+### Copy the karavictl Binary to the Kubernetes Master Node
+
+The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node so the Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization.
+
+```
+sshpass -p ${DriverHostVMPassword} scp bin/karavictl root@${DriverHostVMIP}:/tmp/karavictl
+```
+
+>__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
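+
+For example, once the binary has been copied, it can be marked executable and placed on the PATH; the `/tmp` source path follows the example above, and `/usr/local/bin` is only one possible destination:
+
+```shell
+# Make karavictl executable and move it onto the PATH.
+chmod +x /tmp/karavictl
+mv /tmp/karavictl /usr/local/bin/karavictl
+```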
+
+## Configuring a Dell CSI Driver with CSM for Authorization
+
+The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
+
+### Configuring a Dell CSI Driver
+
+Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
+
+1. Create the token secret in the namespace of the driver.
+
+   ```console
+   # It is assumed that array type powermax has the namespace "powermax", powerflex has the namespace "vxflexos", and powerscale has the namespace "isilon".
+   kubectl apply -f /tmp/token.yaml -n powermax
+   kubectl apply -f /tmp/token.yaml -n vxflexos
+   kubectl apply -f /tmp/token.yaml -n isilon
+   ```
+
+2. Edit the following parameters in the samples/secret/karavi-authorization-config.json file in the [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. In an instance where multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
+
+   | Parameter | Description | Required | Default |
+   | --------- | ----------- | -------- | ------- |
+   | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
+   | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
+   | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
+   | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
+   | systemID | System ID of the backend storage array. | Yes | " " |
+   | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+   | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
+
+Create the karavi-authorization-config secret using the following command:
+
+`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__:
+> - Create the driver secret as you would normally, except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password.
+> - For PowerScale, the *systemID* will be the *clusterName* of the array.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
+3. Create the proxy-server-root-certificate secret.
+ + If running in *insecure* mode, create the secret with empty data: + + `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -` + + Otherwise, create the proxy-server-root-certificate secret with the appropriate file: + + `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -` + + +>__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers. +#### PowerFlex + +Please refer to step 5 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in samples/config.yaml file to communicate with the sidecar. + +1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json + +2. Create vxflexos-config secret using the following command: + + `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -` + +Please refer to step 9 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in *myvalues.yaml* file to communicate with the sidecar. + +3. Enable CSM for Authorization and provide *proxyHost* address + +4. Install the CSI PowerFlex driver +#### PowerMax + +Please refer to step 7 in the [installation steps for PowerMax](../../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar. + +1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json + +2. Enable CSM for Authorization and provide *proxyHost* address + +3. Install the CSI PowerMax driver + +#### PowerScale + +Please refer to step 5 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar. + +1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json + +*Notes:* +> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml. +> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1. + +2. Enable CSM for Authorization and provide *proxyHost* address + +Please refer to step 6 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in samples/secret/secret.yaml file to communicate with the sidecar. + +3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json + +>__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*. + +4. Create the isilon-creds secret using the following command: + + `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -` + +5. 
+
+## Updating CSM for Authorization Proxy Server Configuration
+
+CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
+
+| Parameter | Type | Default | Description |
+| --------- | ---- | ------- | ----------- |
+| web.jwtsigningsecret | String | "secret" | The secret used to sign JWT tokens |
+
+Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for Authorization Server. The secret can be queried using k3s and kubectl like so:
+
+`k3s kubectl -n karavi get secret/karavi-config-secret`
+
+To update or add parameters, you must edit the base64 encoded data in the secret. The `karavi-config-secret` data can be decoded like so:
+
+`k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d`
+
+Save the output to a file or copy it to an editor to make changes. Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so:
+
+`cat <path-to-file> | base64`
+
+Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret:
+
+`k3s kubectl -n karavi edit secret/karavi-config-secret`
+
+Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
+
+>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
+
+`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' | kubectl -n $namespace apply -f -`
+
+## CSM for Authorization Proxy Server Dynamic Configuration Settings
+
+Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM for Authorization logging settings during runtime, run the below command on the K3s cluster, make your changes, and save the updated configmap data.
+
+```
+k3s kubectl -n karavi edit configmap/csm-config-params
+```
+
+This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
+
+```
+kubectl -n [CSI_DRIVER_NAMESPACE] edit configmap/<driver-name>-config-params
+```
+
+Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
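+
+Putting the secret-update steps above together, a minimal end-to-end sketch (assuming GNU base64, whose `-w0` flag disables line wrapping, and a working file named config.yaml) looks like:
+
+```console
+# Decode the current configuration into a file
+k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d > config.yaml
+
+# Edit config.yaml, then re-encode it without line wrapping
+base64 -w0 config.yaml
+
+# Paste the encoded output over the config.yaml value under the data field
+k3s kubectl -n karavi edit secret/karavi-config-secret
+```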
diff --git a/content/v1/authorization/release/_index.md b/content/v1/authorization/release/_index.md
new file mode 100644
index 0000000000..9e877ab1b9
--- /dev/null
+++ b/content/v1/authorization/release/_index.md
@@ -0,0 +1,24 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 6
+Description: >
+  Dell Container Storage Modules (CSM) release notes for authorization
+---
+
+## Release Notes - CSM Authorization 1.3.0
+
+### New Features/Changes
+
+- [CSM-Authorization can be deployed with Helm](https://github.com/dell/csm/issues/261)
+
+### Fixed Issues
+
+- [Authorization proxy server install fails due to missing container-selinux](https://github.com/dell/csm/issues/313)
+- [Permissions on karavictl and k3s binaries are incorrect](https://github.com/dell/csm/issues/277)
+
+
+
+### Known Issues
+
+- [Authorization NGINX Ingress Controller fails to install on OpenShift](https://github.com/dell/csm/issues/317)
\ No newline at end of file
diff --git a/content/v1/authorization/troubleshooting.md b/content/v1/authorization/troubleshooting.md
index 0a47cb4ec8..4792dc36ac 100644
--- a/content/v1/authorization/troubleshooting.md
+++ b/content/v1/authorization/troubleshooting.md
@@ -6,7 +6,14 @@ Description: >
   Troubleshooting guide
 ---
 
+## RPM Deployment
 - [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error)
+- [Installation fails to install policies](#installation-fails-to-install-policies)
+- [After installation, the create-pvc Pod is in an Error state](#after-installation-the-create-pvc-pod-is-in-an-error-state)
+
+## Helm Deployment
+- [The CSI Driver for Dell PowerFlex v2.3.0 is in an Error or CrashLoopBackoff state due to "request denied for path" errors](#the-csi-driver-for-dell-powerflex-v230-is-in-an-error-or-crashloopbackoff-state-due-to-request-denied-for-path-errors)
+
 ---
 
 ### Retrieve CSM Authorization Server Logs
@@ -35,4 +42,126 @@ $ karavictl tenant list --addr
 __Resolution__
 
 Consult with your system administrator or Iptables/firewall documentation. If there are rules in place to
-prevent communication with the `<proxy-host>`, either new rules must be created or existing rules must be updated.
\ No newline at end of file
+prevent communication with the `<proxy-host>`, either new rules must be created or existing rules must be updated.
+
+### Installation fails to install policies
+If SELinux is enabled, the policies may fail to install:
+
+```
+error: failed to install policies (see /tmp/policy-install-for-karavi3163047435): exit status 1
+```
+
+__Resolution__
+
+View the contents of the /tmp/policy-install-for-karavi* file listed in the error message. If there is a Permission denied error while running the policy-install.sh script, manually run the script to install policies.
+
+```
+$ cat /tmp/policy-install-for-karavi3163047435
+
+# find the location of the policy-install.sh script located in the file and manually run the script
+
+$ /tmp/karavi-installer-2908017483/policy-install.sh
+```
+
+### After installation, the create-pvc Pod is in an Error state
+If SELinux is enabled, the create-pvc Pod may be in an Error state:
+
+```
+kube-system create-pvc-44a763c7-e70f-4e32-a114-e94615041042 0/1 Error 0 102s
+```
+
+__Resolution__
+
+Run the following commands to allow the PVC to be created:
+```
+$ semanage fcontext -a -t container_file_t "/var/lib/rancher/k3s/storage(/.*)?"
+$ restorecon -R /var/lib/rancher/k3s/storage/
+```
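+
+Optionally, you can confirm the new file context was applied (illustrative; `ls -Z` prints SELinux labels):
+
+```console
+$ ls -Zd /var/lib/rancher/k3s/storage
+```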
+
+### The CSI Driver for Dell PowerFlex v2.3.0 is in an Error or CrashLoopBackoff state due to "request denied for path" errors
+The vxflexos-controller pods will have logs similar to:
+```
+time="2022-06-30T17:35:03Z" level=error msg="failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path " error="rpc error: code = Internal desc = Unable to list volumes: request denied for path"
+time="2022-06-30T17:35:03Z" level=error msg="array 2d6fb7c6370a990f probe failed: failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path "
+...
+time="2022-06-30T17:35:03Z" level=fatal msg="grpc failed" error="rpc error: code = FailedPrecondition desc = All arrays are not working. Could not proceed further: map[2d6fb7c6370a990f:failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path ]"
+```
+
+The vxflexos-node pods will have logs similar to:
+```
+time="2022-06-30T17:38:32Z" level=error msg="failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path " error="rpc error: code = Internal desc = Unable to list volumes: request denied for path"
+time="2022-06-30T17:38:32Z" level=error msg="array 2d6fb7c6370a990f probe failed: failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path "
+...
+time="2022-06-30T17:38:32Z" level=fatal msg="grpc failed" error="rpc error: code = FailedPrecondition desc = All arrays are not working. Could not proceed further: map[2d6fb7c6370a990f:failed to list vols for array 2d6fb7c6370a990f : rpc error: code = Internal desc = Unable to list volumes: request denied for path ]"
+```
+
+This occurs when the CSM Authorization proxy-server does not allow all driver HTTPS request paths.
+
+__Resolution__
+
+1. Edit the `powerflex-urls` configMap in the namespace where CSM Authorization is deployed to allow all request paths by default.
+
+```
+kubectl -n <namespace> edit configMap powerflex-urls
+```
+
+In the `data` field, navigate towards the bottom where you see `default allow = false`, as shown in the example below. Replace `false` with `true` and save the edit.
+
+```yaml
+data:
+  url.rego: "# Copyright © 2022 Dell Inc., or its subsidiaries. All Rights Reserved.\n#\n#
+    Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not
+    use this file except in compliance with the License.\n# You may obtain a copy
+    of the License at\n#\n#     http:#www.apache.org/licenses/LICENSE-2.0\n#\n# Unless
+    required by applicable law or agreed to in writing, software\n# distributed under
+    the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS
+    OF ANY KIND, either express or implied.\n# See the License for the specific language
+    governing permissions and\n# limitations under the License.\n\npackage karavi.authz.url\n\nallowlist
+    = [\n    \"GET /api/login/\",\n\t\t\"POST /proxy/refresh-token/\",\n\t\t\"GET
+    /api/version/\",\n\t\t\"GET /api/types/System/instances/\",\n\t\t\"GET /api/types/StoragePool/instances/\",\n\t\t\"POST
+    /api/types/Volume/instances/\",\n\t\t\"GET /api/instances/Volume::[a-f0-9]+/$\",\n\t\t\"POST
+    /api/types/Volume/instances/action/queryIdByKey/\",\n\t\t\"GET /api/instances/System::[a-f0-9]+/relationships/Sdc/\",\n\t\t\"GET
+    /api/instances/Sdc::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/Sdc::[a-f0-9]+/relationships/Volume/\",\n\t\t\"GET
+    /api/instances/Volume::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/StoragePool::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"POST
+    /api/instances/Volume::[a-f0-9]+/action/addMappedSdc/\",\n\t\t\"POST /api/instances/Volume::[a-f0-9]+/action/removeMappedSdc/\",\n\t\t\"POST
+    /api/instances/Volume::[a-f0-9]+/action/removeVolume/\"\n]\n\ndefault allow =
+    false\nallow {\n\tregex.match(allowlist[_], sprintf(\"%s %s\", [input.method,
+    input.url]))\n}\n"
+```
+
+Edited data:
+
+```yaml
+data:
+  url.rego: "# Copyright © 2022 Dell Inc., or its subsidiaries. All Rights Reserved.\n#\n#
+    Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not
+    use this file except in compliance with the License.\n# You may obtain a copy
+    of the License at\n#\n#     http:#www.apache.org/licenses/LICENSE-2.0\n#\n# Unless
+    required by applicable law or agreed to in writing, software\n# distributed under
+    the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS
+    OF ANY KIND, either express or implied.\n# See the License for the specific language
+    governing permissions and\n# limitations under the License.\n\npackage karavi.authz.url\n\nallowlist
+    = [\n    \"GET /api/login/\",\n\t\t\"POST /proxy/refresh-token/\",\n\t\t\"GET
+    /api/version/\",\n\t\t\"GET /api/types/System/instances/\",\n\t\t\"GET /api/types/StoragePool/instances/\",\n\t\t\"POST
+    /api/types/Volume/instances/\",\n\t\t\"GET /api/instances/Volume::[a-f0-9]+/$\",\n\t\t\"POST
+    /api/types/Volume/instances/action/queryIdByKey/\",\n\t\t\"GET /api/instances/System::[a-f0-9]+/relationships/Sdc/\",\n\t\t\"GET
+    /api/instances/Sdc::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/Sdc::[a-f0-9]+/relationships/Volume/\",\n\t\t\"GET
+    /api/instances/Volume::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"GET /api/instances/StoragePool::[a-f0-9]+/relationships/Statistics/\",\n\t\t\"POST
+    /api/instances/Volume::[a-f0-9]+/action/addMappedSdc/\",\n\t\t\"POST /api/instances/Volume::[a-f0-9]+/action/removeMappedSdc/\",\n\t\t\"POST
+    /api/instances/Volume::[a-f0-9]+/action/removeVolume/\"\n]\n\ndefault allow =
+    true\nallow {\n\tregex.match(allowlist[_], sprintf(\"%s %s\", [input.method,
+    input.url]))\n}\n"
+```
+
+2. Rollout restart the CSM Authorization proxy-server so the policy change gets applied.
+
+```
+kubectl -n <namespace> rollout restart deploy/proxy-server
+```
+
+3. Optionally, rollout restart the CSI Driver for Dell PowerFlex to restart the driver pods. Alternatively, wait for the Kubernetes CrashLoopBackoff behavior to restart the driver.
+
+```
+kubectl -n <namespace> rollout restart deploy/vxflexos-controller
+kubectl -n <namespace> rollout restart daemonSet/vxflexos-node
+```
diff --git a/content/v1/csidriver/_index.md b/content/v1/csidriver/_index.md
index 495c29b500..732f364787 100644
--- a/content/v1/csidriver/_index.md
+++ b/content/v1/csidriver/_index.md
@@ -14,16 +14,16 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
 
 ### Supported Operating Systems/Container Orchestrator Platforms
 {{}}
-| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
 |---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:|
-| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
+| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
 | RHEL | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x |
 | Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 |
 | CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 |
 | SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 |
-| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 |
-| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.5.x | 3.4.x | 3.4.x |
-| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 |
+| Red Hat OpenShift | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS |
+| Mirantis Kubernetes Engine | 3.5.x | 3.5.x | 3.5.x | 3.5.x | 3.5.x |
+| Google Anthos | 1.9 | 1.8 | no | 1.9 | 1.9 |
 | VMware Tanzu | no | no | NFS | NFS | NFS |
 | Rancher Kubernetes Engine | yes | yes | yes | yes | yes |
 | Amazon Elastic Kubernetes Service<br>Anywhere | no | yes | no | no | yes |
@@ -32,39 +32,40 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
 
 ### CSI Driver Capabilities
 {{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
-|--------------------------|:--------:|:---------:|:------:|:----------:|:----------:|
-| CSI Driver version | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 |
-| Static Provisioning | yes | yes | yes | yes | yes |
-| Dynamic Provisioning | yes | yes | yes | yes | yes |
-| Expand Persistent Volume | yes | yes | yes | yes | yes |
-| Create VolumeSnapshot | yes | yes | yes | yes | yes |
-| Create Volume from Snapshot | yes | yes | yes | yes | yes |
-| Delete Snapshot | yes | yes | yes | yes | yes |
-| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)| RWO/<br>RWOP(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP(Raw block) | RWO/ROX/RWOP<br><br>RWX (Raw block only) | RWO/ROX/RWOP<br><br>RWX (Raw block & NFS only) | RWO/RWX/ROX/<br>RWOP | RWO/RWOP<br>(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP<br>(RawBlock, NFS) |
-| CSI Volume Cloning | yes | yes | yes | yes | yes |
-| CSI Raw Block Volume | yes | yes | yes | no | yes |
-| CSI Ephemeral Volume | no | yes | yes | yes | yes |
-| Topology | yes | yes | yes | yes | yes |
-| Multi-array | yes | yes | yes | yes | yes |
-| Volume Health Monitoring | yes | yes | yes | yes | yes |
+| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
+|--------------------------|:--------:|:---------:|:---------:|:----------:|:----------:|
+| CSI Driver version | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 |
+| Static Provisioning | yes | yes | yes | yes | yes |
+| Dynamic Provisioning | yes | yes | yes | yes | yes |
+| Expand Persistent Volume | yes | yes | yes | yes | yes |
+| Create VolumeSnapshot | yes | yes | yes | yes | yes |
+| Create Volume from Snapshot | yes | yes | yes | yes | yes |
+| Delete Snapshot | yes | yes | yes | yes | yes |
+| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)| **FC/iSCSI:**<br>RWO/<br>RWOP<br>**Raw block:**<br>RWO/<br>RWX/<br>ROX/<br>RWOP | RWO/ROX/RWOP<br><br>RWX (Raw block only) | RWO/ROX/RWOP<br><br>RWX (Raw block & NFS only) | RWO/RWX/ROX/<br>RWOP | RWO/RWOP<br>(FC/iSCSI)<br>RWO/<br>RWX/<br>ROX/<br>RWOP<br>(RawBlock, NFS) |
+| CSI Volume Cloning | yes | yes | yes | yes | yes |
+| CSI Raw Block Volume | yes | yes | yes | no | yes |
+| CSI Ephemeral Volume | no | yes | yes | yes | yes |
+| Topology | yes | yes | yes | yes | yes |
+| Multi-array | yes | yes | yes | yes | yes |
+| Volume Health Monitoring | yes | yes | yes | yes | yes |
 {{}}
 
 ### Supported Storage Platforms
 {{}}
-| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
 |---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:|
-| Storage Array |5978.479.479, 5978.711.711<br>Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x, 2.1.x |
+| Storage Array |5978.479.479, 5978.711.711<br>Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | 1.0.x, 2.0.x, 2.1.x, 3.0 |
 {{}}
 
 ### Backend Storage Details
 {{}}
-| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore |
+| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
 |---------------|:----------------:|:------------------:|:----------------:|:----------------:|:----------------:|
 | Fibre Channel | yes | N/A | yes | N/A | yes |
 | iSCSI | yes | N/A | yes | N/A | yes |
 | NVMeTCP | N/A | N/A | N/A | N/A | yes |
+| NVMeFC | N/A | N/A | N/A | N/A | yes |
 | NFS | N/A | N/A | yes | yes | yes |
 | Other | N/A | ScaleIO protocol | N/A | N/A | N/A |
 | Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS |
 | Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin |
 | Platform-specific configurable settings | Service Level selection<br>iSCSI CHAP | - | Host IO Limit<br>Tiering Policy<br>NFS Host IO size<br>Snapshot Retention duration | Access Zone<br>NFS version (3 or 4);Configurable Export IPs | iSCSI CHAP |
-{{}}
+{{}}
\ No newline at end of file
diff --git a/content/v1/csidriver/archives/_index.md b/content/v1/csidriver/archives/_index.md
deleted file mode 100644
index c6df42da23..0000000000
--- a/content/v1/csidriver/archives/_index.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Archives
-description: Product Guide and Release Notes for previous versions of Dell CSI drivers
----
-
-## PowerScale
-### v1.3
--[Release Notes](/pdf/RN_isilon.pdf)
-
--[Product Guide](/pdf/PG_isilon.pdf)
-
-### v1.2
-
--[Release Notes](/pdf/RN_isilon_2.pdf)
-
--[Product Guide](/pdf/PG_isilon_2.pdf)
-
-## PowerMax
-
-### v1.4
--[Release Notes](/pdf/RN_powermax.pdf)
-
--[Product Guide](/pdf/PG_powermax.pdf)
-
-## PowerFlex
-
-### v1.2
--[Release Notes](/pdf/RN_vxflex.pdf)
-
--[Product Guide](/pdf/PG_vxflex.pdf)
-
-## PowerStore
-### v1.1
--[Release Notes](/pdf/RN_powerstore.pdf)
-
--[Product Guide](/pdf/PG_powerstore.pdf)
-
-## Unity
-### v1.3
--[Release Notes](/pdf/RN_unity.pdf)
-
--[Product Guide](/pdf/PG_unity.pdf)
-
diff --git a/content/v1/csidriver/features/powerflex.md b/content/v1/csidriver/features/powerflex.md
index 6353aa6f58..cfc331a718 100644
--- a/content/v1/csidriver/features/powerflex.md
+++ b/content/v1/csidriver/features/powerflex.md
@@ -7,7 +7,7 @@ Description: Code features for PowerFlex Driver
 
 ## Volume Snapshot Feature
 
-The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.21/1.22/1.23.
+The CSI PowerFlex driver versions 2.0 and higher support v1 snapshots.
 
 In order to use Volume Snapshots, ensure the following components are deployed to your cluster:
 - Kubernetes Volume Snapshot CRDs
@@ -82,35 +82,7 @@ spec:
 
 ## Create Consistent Snapshot of Group of Volumes
 
-This feature extends CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter side-car as part of the PowerFlex driver. Once the sidecar has been deployed, users can make snapshots by using yaml files such as this one:
-```
-apiVersion: volumegroup.storage.dell.com/v1
-kind: DellCsiVolumeGroupSnapshot
-metadata:
-  name: "vg-snaprun1"
-  namespace: "helmtest-vxflexos"
-spec:
-  # Add fields here
-  driverName: "csi-vxflexos.dellemc.com"
-  # defines how to process VolumeSnapshot members when volume group snapshot is deleted
-  # "Retain" - keep VolumeSnapshot instances
-  # "Delete" - delete VolumeSnapshot instances
-  memberReclaimPolicy: "Retain"
-  volumesnapshotclass: "vxflexos-snapclass"
-  pvcLabel: "vgs-snap-label"
-  # pvcList:
-  #   - "pvcName1"
-  #   - "pvcName2"
-```
-The pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
-```
-metadata:
-  name: pvol0
-  namespace: helmtest-vxflexos
-  labels:
-    volume-group: vgs-snap-label
-```
-More details about the installation and use of the VolumeGroup Snapshotter can be found here: [dell-csi-volumegroup-snapshotter](https://github.com/dell/csi-volumegroup-snapshotter).
+This feature extends the CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter side-car as part of the PowerFlex driver.
Once the sidecar has been deployed, users can make snapshots by using yaml files. More information can be found here: [Volume Group Snapshotter](../../../snapshots/volume-group-snapshots/).
 
 ## Volume Expansion Feature
@@ -398,9 +370,9 @@ controller:
     - key: "node-role.kubernetes.io/master"
       operator: "Exists"
       effect: "NoSchedule"
-``` 
+```
 > *NOTE:* Tolerations/selectors work the same way for node pods.
- 
+
 For configuring Controller HA on the Dell CSI Operator, please refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).
 
 ## SDC Deployment
@@ -450,7 +422,7 @@ There is a sample yaml file in the samples folder under the top-level directory
      endpoint: "https://127.0.0.2"
      skipCertificateValidation: true
      mdm: "10.0.0.3,10.0.0.4"
-  ``` 
+  ```
   Here we specify that we want the CSI driver to manage two arrays: one with an IP `127.0.0.1` and the other with an IP `127.0.0.2`.
   To use this config we need to create a Kubernetes secret from it. To do so, run the following command:
@@ -546,7 +518,7 @@ To run the corresponding helm test, go to csi-vxflexos/test/helm/ephemeral and f
 Then run:
 ````
 ./testEphemeral.sh
-```` 
+````
 this test deploys the pod with two ephemeral volumes, and write some data to them before deleting the pod.
 When creating ephemeral volumes, it is important to specify the following within the volumeAttributes section: volumeName, size, storagepool, and if you want to use a non-default array, systemID.
@@ -587,7 +559,7 @@ Events:
   Type     Reason                  Age   From                                                  Message
   ----     ------                  ----  ----                                                  ------
   Warning  VolumeConditionAbnormal 32s   csi-pv-monitor-controller-csi-vxflexos.dellemc.com    Volume is not found at 2021-11-03 20:31:04
-``` 
+```
 Events will also be reported to pods that have abnormal volumes. In these two events from `kubectl describe pods -n <namespace>`, we can see that this pod has two abnormal volumes: one volume was unmounted outside of Kubernetes, while another was deleted from PowerFlex array.
 ```
 Events:
diff --git a/content/v1/csidriver/features/powermax.md b/content/v1/csidriver/features/powermax.md
index a635b79ec6..697c1040b1 100644
--- a/content/v1/csidriver/features/powermax.md
+++ b/content/v1/csidriver/features/powermax.md
@@ -399,7 +399,7 @@ After a successful installation of the driver, if a node Pod is running successf
 
 The values for all these keys are always set to the name of the provisioner which is usually `csi-powermax.dellemc.com`.
 
-> *NOTE:* The Topology support does not include any customer-defined topology, that is, users cannot create their own labels for nodes and storage classes and expect the labels to be honored by the driver.
+Starting with version 2.3.0, topology keys have been enhanced: the driver can filter the arrays and associated transport protocols available to each node and create topology keys based on such user input.
 
 ### Topology Usage
 To use the Topology feature, the storage classes must be modified as follows:
@@ -437,6 +437,80 @@ on any worker node with access to the PowerMax array `000000000001` irrespective
 
 For additional information on how to use _Topology aware Volume Provisioning_, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
 
+### Custom Topology keys
+To use the enhanced topology keys:
+1. Set node.topologyControl.enabled to true.
+2. Edit the config file [topologyConfig.yaml](https://github.com/dell/csi-powermax/blob/main/samples/configmap/topologyConfig.yaml) in the `csi-powermax/samples/configmap` folder and provide values for the following parameters.
+
+| Parameter | Description |
+|-----------|--------------|
+| allowedConnections | List of node, array, and protocol info for the user-allowed configuration |
+| allowedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+| allowedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+| deniedConnections | List of node, array, and protocol info for the user-denied configuration |
+| deniedConnections.nodeName | Name of the node on which the user wants to apply the given rules |
+| deniedConnections.rules | List of StorageArrayID:TransportProtocol pairs |
+
+
+**Sample config file:**
+
+```
+# allowedConnections contains a list of (node, array and protocol) info for user allowed configuration
+# For any given storage array ID and protocol on a Node, topology keys will be created for just that pair and
+# every other configuration is ignored
+# Please refer to the doc website for a detailed explanation of each configuration parameter
+# and the various possible inputs
+allowedConnections:
+  # nodeName: Name of the node on which user wants to apply given rules
+  # Allowed values:
+  #   nodeName - name of a specific node
+  #   * - all the nodes
+  # Examples: "node1", "*"
+  - nodeName: "node1"
+    # rules is a list of 'StorageArrayID:TransportProtocol' pairs. ':' is required between both values
+    # Allowed values:
+    #   StorageArrayID:
+    #     - SymmetrixID : for specific storage array
+    #     - "*" :- for all the arrays connected to the node
+    #   TransportProtocol:
+    #     - FC : Fibre Channel protocol
+    #     - ISCSI : iSCSI protocol
+    #     - "*" - for all the possible Transport Protocol
+    # Examples: "000000000001:FC", "000000000002:*", "*:FC", "*:*"
+    rules:
+      - "000000000001:FC"
+      - "000000000002:FC"
+  - nodeName: "*"
+    rules:
+      - "000000000002:FC"
+# deniedConnections contains a list of (node, array and protocol) info for denied configurations by user
+# For any given storage array ID and protocol on a Node, topology keys will be created for every other configuration but
+# not these input pairs
+deniedConnections:
+  - nodeName: "node2"
+    rules:
+      - "000000000002:*"
+  - nodeName: "node3"
+    rules:
+      - "*:*"
+```
+
+3. Use the command below to create the ConfigMap, with the name `node-topology-config`, in the namespace powermax:
+
+`kubectl create configmap node-topology-config --from-file=topologyConfig.yaml -n powermax`
+
+For example, with 3 nodes and 2 arrays, the sample config file above produces the topology keys below:
+
+New Topology keys
+N1: csi-driver/000000000001.FC:csi-driver, csi-driver/000000000002.FC:csi-driver
+
+N2 and N3: None
+
+
+>Note: The name of the configmap should always be `node-topology-config`.
+
+
 ## Dynamic Logging Configuration
 
 This feature is introduced in CSI Driver for PowerMax version 2.0.0.
diff --git a/content/v1/csidriver/features/powerstore.md b/content/v1/csidriver/features/powerstore.md
index 1f5b1fb50e..e4a3103b11 100644
--- a/content/v1/csidriver/features/powerstore.md
+++ b/content/v1/csidriver/features/powerstore.md
@@ -541,7 +541,7 @@ The value of that parameter is added as an additional entry to NFS Export host a
 For example the following notation:
 ```yaml
 externalAccess: "10.0.0.0/24"
-``` 
+```
 
 This means that we allow for NFS Export created by driver to be consumed by address range `10.0.0.0-10.0.0.255`.
 
@@ -668,10 +668,65 @@ nfsAcls: "A::OWNER@:rwatTnNcCy,A::GROUP@:rxtncy,A::EVERYONE@:rxtncy,A::user@doma
 
 >POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
 
-## NVMe/TCP Support
-
-CSI Driver for Dell Powerstore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol on secret should be specified as `NVMeTCP`.
-In case blockProtocol is specified as `auto`, the driver will be able to find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, then FC gets the highest priority followed by iSCSI and then NVMeTCP.
+## NVMe Support
+**NVMeTCP Support**
+CSI Driver for Dell PowerStore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol on the secret should be specified as `NVMeTCP`.
 >Note: NVMe/TCP is not supported on RHEL 7.x versions and CoreOS.
 >NVMe/TCP is supported with Powerstore 2.1 and above.
+
+**NVMeFC Support**
+CSI Driver for Dell PowerStore 2.3.0 and above supports NVMe/FC provisioning. To enable NVMe/FC provisioning, blockProtocol on the secret should be specified as `NVMeFC`.
+>NVMe/FC is supported with PowerStore 3.0 and above.
+
+>The NVMe-FC feature is supported with Helm.
+
+>Note:
+> In case blockProtocol is specified as `auto`, the driver will be able to find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, then NVMeFC gets the highest priority, followed by NVMeTCP, followed by FC, and then iSCSI.
+
+## Volume group snapshot Support
+
+CSI Driver for Dell PowerStore 2.3.0 and above supports creating volume groups and taking snapshots of them by making use of a CRD (Custom Resource Definition). More information can be found here: [Volume Group Snapshotter](../../../snapshots/volume-group-snapshots/).
+
+## Configurable Volume Attributes (Optional)
+
+The CSI PowerStore driver version 2.3.0 and above supports configurable volume attributes.
+
+The PowerStore array provides a set of optional volume creation attributes. These attributes can be configured for the volume (block and NFS) at the time of creation through the PowerStore CSI driver.
+These attributes can be specified as labels in the PVC yaml file. The following is a sample manifest for creating a volume with some of the configurable volume attributes.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc1
+  namespace: default
+  labels:
+    description: DB-volume
+    appliance_id: A1
+    volume_group_id: f5f9dbbd-d12f-463e-becb-2e6d0a85405e
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+  storageClassName: powerstore-ext4
+
+```
+
+>Note: The default description value is `pvcName-pvcNamespace`.
+
+The following is the list of all the attributes supported by the PowerStore CSI driver:
+
+| Block Volume | NFS Volume |
+| --- | --- |
+| description <br> appliance_id <br> volume_group_id <br> protection_policy_id <br> performance_policy_id <br> app_type <br> app_type_other <br> <br> <br> <br> <br> <br> | description <br> config_type <br> access_policy <br> locking_policy <br> folder_rename_policy <br> is_async_mtime_enabled <br> protection_policy_id <br> file_events_publishing_mode <br> host_io_size <br> flr_attributes.flr_create.mode <br> flr_attributes.flr_create.default_retention <br> flr_attributes.flr_create.maximum_retention <br> flr_attributes.flr_create.minimum_retention |
+
+**Note:**
+>Refer to the PowerStore array specification for the allowed values for each attribute, at `https://<array-hostname-or-ip>/swaggerui/`.
+>Make sure that the attributes specified are supported by the version of the PowerStore array used.
+
+>The Configurable Volume Attributes feature is supported with Helm.
diff --git a/content/v1/csidriver/features/unity.md b/content/v1/csidriver/features/unity.md
index 7559245396..4cac022944 100644
--- a/content/v1/csidriver/features/unity.md
+++ b/content/v1/csidriver/features/unity.md
@@ -1,6 +1,6 @@
 ---
-title: Unity
-Description: Code features for Unity Driver
+title: Unity XT
+Description: Code features for Unity XT Driver
 weight: 1
 ---
@@ -30,9 +30,9 @@ kubectl delete -f test/sample.yaml
 
 ## Consuming existing volumes with static provisioning
 
-You can use existent volumes from Unity array as Persistent Volumes in your Kubernetes, to do that you must perform the following steps:
+You can use existing volumes from a Unity XT array as Persistent Volumes in your Kubernetes cluster. To do that, perform the following steps:
 
-1. Open your volume in Unity Management UI (Unisphere), and take a note of volume-id. The `volume-id` looks like `csiunity-xxxxx` and CLI ID looks like `sv_xxxx`.
+1. Open your volume in the Unity XT Management UI (Unisphere), and take note of the volume-id. The `volume-id` looks like `csiunity-xxxxx` and the CLI ID looks like `sv_xxxx`.
 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs.
 ```yaml
@@ -106,8 +106,6 @@ In order to use Volume Snapshots, ensure the following components have been depl
 
 ### Volume Snapshot Class
 
-During the installation of the CSI Unity 2.0 driver and higher, a Volume Snapshot Class is not created and need to create Volume Snapshot Class.
-
 Following is the manifest to create Volume Snapshot Class :
 ```yaml
@@ -146,7 +144,7 @@ status:
   readyToUse: true
 ```
 Note :
-For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does not create any Volume Snapshot classes as part of the driver installation. A set of annotated volume snapshot class manifests have been provided in the `csi-unity/samples/volumesnapshotclass/` folder. Use these samples to create new Volume Snapshot to provision storage.
+A set of annotated volume snapshot class manifests have been provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshot classes to provision storage.
 
 ### Creating PVCs with Volume Snapshots as Source
@@ -173,7 +171,7 @@
 
 ## Volume Expansion
 
-The CSI Unity driver version 1.3 and later supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node).
+The CSI Unity XT driver supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node).
 
 To use this feature, the storage class that is used to create the PVC must have the attribute `allowVolumeExpansion` set to true.
@@ -215,7 +213,7 @@
 
 ## Raw block support
 
-The CSI Unity driver supports Raw Block Volumes.
+The CSI Unity XT driver supports Raw Block Volumes.
 Raw Block volumes are created using the volumeDevices list in the pod template spec with each entry accessing a volumeClaimTemplate specifying a volumeMode: Block. The following is an example configuration:
 ```yaml
@@ -259,14 +257,14 @@ spec:
 
 Access modes allowed are ReadWriteOnce and ReadWriteMany. Raw Block volumes are presented as a block device to the pod by using a bind mount to a block device in the node's file system. The driver does not format or check the format of any file system on the block device.
 
-Raw Block volumes support online Volume Expansion, but it is up to the application to manage to reconfigure the file system (if any) to the new size. Access mode ReadOnlyMany is not supported with raw block since we cannot restrict volumes to be readonly from Unity.
+Raw Block volumes support online Volume Expansion, but it is up to the application to manage and reconfigure the file system (if any) to the new size. Access mode ReadOnlyMany is not supported with raw block since we cannot restrict volumes to be readonly from Unity XT.
 
 For additional information, see the [kubernetes](https://kubernetes.io/DOCS/CONCEPTS/STORAGE/PERSISTENT-VOLUMES/#volume-mode) website.
 
 ## Volume Cloning Feature
 
-The CSI Unity driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume.
+The CSI Unity XT driver supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume.
 
 Source and destination PVC must be in the same namespace and have the same Storage Class.
 
@@ -310,11 +308,11 @@ spec:
 
 ## Ephemeral Inline Volume
 
-The CSI Unity driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
+The CSI Unity XT driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification.
 
 At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed.
 
-The following is a sample manifest for creating ephemeral volume in pod manifest with CSI Unity driver.
+The following is a sample manifest for creating an ephemeral volume in a pod manifest with the CSI Unity XT driver.
 
 ```yaml
 kind: Pod
@@ -361,9 +359,9 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t
 
 ## Controller HA
 
-The CSI Unity driver supports controller HA feature. Instead of StatefulSet controller pods deployed as a Deployment.
+The CSI Unity XT driver supports the controller HA feature. Instead of a StatefulSet, controller pods are deployed as a Deployment.
 
-By default, number of replicas is set to 2, you can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation. When installing via Operator you can change the `replicas` parameter in the `spec.driver` section in your Unity Custom Resource.
+By default, the number of replicas is set to 2. You can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation (see the sketch below). When installing via Operator, you can change the `replicas` parameter in the `spec.driver` section in your Unity XT Custom Resource.
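+
+As a minimal sketch (assuming a Helm-based install and the `controllerCount` parameter documented above), a values override that disables controller HA could look like:
+
+```yaml
+# myvalues.yaml (excerpt)
+controllerCount: 1   # a single controller pod; the default of 2 enables HA
+```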
 
 When multiple replicas of controller pods are in a cluster each sidecar (Attacher, Provisioner, Resizer, and Snapshotter) tries to get a lease so only one instance of each sidecar is active in the cluster at a time.
 
@@ -407,7 +405,7 @@ As said before you can configure where node driver pods would be assigned in a s
 
 ## Topology
 
-The CSI Unity driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
+The CSI Unity XT driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed.
 
 This Topology support does not include customer-defined topology, users cannot create their own labels for nodes, they should use whatever labels are returned by the driver and applied automatically by Kubernetes on its nodes.
 
@@ -433,7 +431,7 @@ allowedTopologies:
           - "true"
 ```
 
-This example matches all nodes where the driver has a connection to the Unity array with array ID mentioned via Fiber Channel. Similarly, by replacing `fc` with `iscsi` in the key checks for iSCSI connectivity with the node.
+This example matches all nodes where the driver has a connection to the Unity XT array with the array ID mentioned via Fibre Channel. Similarly, replacing `fc` with `iscsi` in the key checks for iSCSI connectivity with the node.
 
 You can check what labels your nodes contain by running `kubectl get nodes --show-labels` command.
 
@@ -442,7 +440,7 @@
 For any additional information about the topology, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html).
 
 ## Volume Limit
-The CSI Driver for Dell Unity allows users to specify the maximum number of Unity volumes that can be used in a node.
+The CSI Driver for Dell Unity XT allows users to specify the maximum number of Unity XT volumes that can be used in a node.
 
 The user can set the volume limit for a node by creating a node label `max-unity-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node <node_name> max-unity-volumes-per-node=<volume_limit>`
 
@@ -452,12 +450,12 @@ The user can also set the volume limit for all the nodes in the cluster by speci
 
 >**NOTE:** <br>To reflect the changes after setting the value either via node label or in values.yaml file, user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.<br><br>If the value is set both by node label and values.yaml file then node label value will get the precedence and user has to remove the node label in order to reflect the values.yaml value.<br><br>The default value of `maxUnityVolumesPerNode` is 0.<br><br>If `maxUnityVolumesPerNode` is set to zero, then Container Orchestration decides how many volumes of this type can be published by the controller to the node.<br><br>The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set.
 
 ## NAT Support
-CSI Driver for Dell Unity is supported in the NAT environment for NFS protocol.
+CSI Driver for Dell Unity XT is supported in the NAT environment for NFS protocol.
 
 The user will be able to install the driver and able to create pods.
 
 ## Single Pod Access Mode for PersistentVolumes
-CSI Driver for Unity supports a new accessmode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity allows to restrict volume access to a single pod in the cluster
+CSI Driver for Unity XT supports a new access mode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity XT restricts volume access to a single pod in the cluster.
 
 Prerequisites
 1. Enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet as the ReadWriteOncePod access mode is in alpha for Kubernetes v1.22 and is only supported for CSI volumes. You can enable the feature by setting command line arguments:
@@ -477,14 +475,13 @@ spec:
 ```
 
 ## Volume Health Monitoring
-CSI Driver for Unity supports volume health monitoring. This is an alpha feature and requires feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`.
+CSI Driver for Unity XT supports volume health monitoring. This is an alpha feature and requires the feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`.
 This feature:
 1. Reports on the condition of the underlying volumes via events when a volume condition is abnormal. We can watch the events on the describe of pvc `kubectl describe pvc <pvc-name> -n <namespace>`
 2. Collects the volume stats. We can see the volume usage in the node logs `kubectl logs <node-pod> -n <namespace> -c driver`
-By default this is disabled in CSI Driver for Unity. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
+By default this is disabled in CSI Driver for Unity XT. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition.
 
 ## Dynamic Logging Configuration
-This feature is introduced in CSI Driver for unity version 2.0.0.
 
 ### Helm based installation
 As part of driver installation, a ConfigMap with the name `unity-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver.
@@ -508,13 +505,11 @@ To update the log level dynamically user has to edit the ConfigMap `unity-config
 kubectl edit configmap -n unity unity-config-params
 ```
 
->Note: Prior to CSI Driver for unity version 2.0.0, the log level was allowed to be updated dynamically through `logLevel` attribute in the secret object.
-
-## Tenancy support for Unity NFS
+## Tenancy support for Unity XT NFS
 
-The CSI Unity driver version 2.1.0 (and later versions) supports the Tenancy feature of Unity such that the user will be able to associate specific worker nodes (in the cluster) and NFS storage volumes with Tenant.
+The CSI Unity XT driver version 2.1.0 (and later versions) supports the Tenancy feature of Unity XT such that the user will be able to associate specific worker nodes (in the cluster) and NFS storage volumes with Tenant.
-Prerequisites (to be manually created in Unity Array) before the driver installation:
+Prerequisites (to be manually created in the Unity XT Array) before the driver installation:
 * Create Tenants
 * Create Pools
 * Create NAS Servers with Tenant and Pool mapping
@@ -634,4 +629,4 @@ data:
   SYNC_NODE_INFO_TIME_INTERVAL: "15"
   TENANT_NAME: ""
->Note: csi-unity supports Tenancy in multi-array setup, provided the TenantName is the same across Unity instances.
+>Note: csi-unity supports Tenancy in a multi-array setup, provided the TenantName is the same across Unity XT instances.
diff --git a/content/v1/csidriver/installation/helm/isilon.md b/content/v1/csidriver/installation/helm/isilon.md
index 08d51943eb..d1ba801503 100644
--- a/content/v1/csidriver/installation/helm/isilon.md
+++ b/content/v1/csidriver/installation/helm/isilon.md
@@ -25,6 +25,7 @@ The following are requirements to be met before installing the CSI Driver for De
 - If using Snapshot feature, satisfy all Volume Snapshot requirements
 - If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
 - If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first
+- If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../../../../resiliency/deployment/) first
 
 ### Install Helm 3.0
 
@@ -120,7 +121,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
 ## Install the Driver
 
 **Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
 2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
 3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
 4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation.
@@ -139,6 +140,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
 | kubeletConfigDir | Specify kubelet config dir path | Yes | "/var/lib/kubelet" |
 | enableCustomTopology | Indicates PowerScale FQDN/IP which will be fetched from node label and the same will be used by controller and node pod to establish a connection to Array. This requires enableCustomTopology to be enabled. <br> | No | false |
 | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+ | podmonAPIPort | Defines the port which csi-driver will use within the cluster to support podmon | No | 8083 |
+ | maxPathLen | Defines the maximum length of path for a volume | No | 192 |
 | ***controller*** | Configure controller pod specific parameters | | |
 | controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 |
 | volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -171,6 +174,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
 | sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
 | proxyHost | Hostname of the csm-authorization server. | No | Empty |
 | skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
+ | **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contacting support for additional information. | - | - |
+ | enabled | A boolean that enables/disables the podmon feature. | No | false |
+ | image | Image for podmon. | No | " " |
 
 *NOTE:*
 
@@ -261,7 +267,7 @@ The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-install
 
 ### What happens to my existing storage classes?
 
-*Upgrading from CSI PowerScale v2.1 driver*:
+*Upgrading from CSI PowerScale v2.2 driver*:
 The storage classes created as part of the installation have an annotation - "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish so.
 
 *NOTE*:
@@ -283,7 +289,7 @@ Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volu
 
 ### What happens to my existing Volume Snapshot Classes?
 
-*Upgrading from CSI PowerScale v2.1 driver*:
+*Upgrading from CSI PowerScale v2.2 driver*:
 The existing volume snapshot class will be retained.
 
 *Upgrading from an older version of the driver*:
diff --git a/content/v1/csidriver/installation/helm/powerflex.md b/content/v1/csidriver/installation/helm/powerflex.md
index 9bdb0ccdc0..c021fb43e9 100644
--- a/content/v1/csidriver/installation/helm/powerflex.md
+++ b/content/v1/csidriver/installation/helm/powerflex.md
@@ -29,6 +29,7 @@ The following are requirements that must be met before installing the CSI Driver
 - If using Snapshot feature, satisfy all Volume Snapshot requirements
 - A user must exist on the array with a role _>= FrontEndConfigure_
 - If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
+- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See the [troubleshooting section](../../../troubleshooting/powerflex.md) for details.
 
 ### Install Helm 3.0
 
@@ -109,7 +110,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 ## Install the Driver
 
 **Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
 2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
@@ -130,61 +131,36 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 
    Example: `samples/config.yaml`
 
-   ```yaml
-   # Username for accessing PowerFlex system.
-   # If authorization is enabled, username will be ignored.
-   - username: "admin"
-     # Password for accessing PowerFlex system.
-     # If authorization is enabled, password will be ignored.
-     password: "password"
-     # System name/ID of PowerFlex system.
-     systemID: "ID1"
-     # Previous names of PowerFlex system if used for PV.
-     allSystemNames: "pflex-1,pflex-2"
-     # REST API gateway HTTPS endpoint for PowerFlex system.
-     # If authorization is enabled, endpoint should be the HTTPS localhost endpoint that
-     # the authorization sidecar will listen on
-     endpoint: "https://127.0.0.1"
-     # Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface.
-     # Allowed values: true or false
-     # Default value: true
-     skipCertificateValidation: true
-     # indicates if this array is the default array
-     # needed for backwards compatibility
-     # only one array is allowed to have this set to true
-     # Default value: false
-     isDefault: true
-     # defines the MDM(s) that SDC should register with on start.
-     # Allowed values: a list of IP addresses or hostnames separated by comma.
-     # Default value: none
-     mdm: "10.0.0.1,10.0.0.2"
-   - username: "admin"
-     password: "Password123"
-     systemID: "ID2"
-     endpoint: "https://127.0.0.2"
-     skipCertificateValidation: true
-     mdm: "10.0.0.3,10.0.0.4"
-   ```
-
-   After editing the file, run the following command to create a secret called `vxflexos-config`:
+```yaml
+- username: "admin"
+  password: "Password123"
+  systemID: "ID2"
+  endpoint: "https://127.0.0.2"
+  skipCertificateValidation: true
+  isDefault: true
+  mdm: "10.0.0.3,10.0.0.4"
+```
+ *NOTE: To use multiple arrays, copy and paste the section above for each array. Make sure isDefault is set to true for only one array.*
+
+After editing the file, run the command below to create a secret called `vxflexos-config`:
 
 `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml`
 
-   Use the following command to replace or update the secret:
+Use the command below to replace or update the secret:
 
 `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
 
-   *NOTE:*
+*NOTE:*
 
-   - The user needs to validate the YAML syntax and array-related key/values while replacing the vxflexos-creds secret.
-   - If you want to create a new array or update the MDM values in the secret, you will need to reinstall the driver. If you change other details, such as login information, the secret will dynamically update -- see [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration) for more details.
-   - Old `json` format of the array configuration file is still supported in this release. If you already have your configuration in `json` format, you may continue to maintain it or you may transfer this configuration to `yaml`
-   format and replace/update the secret.
-   - "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
-   - Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
-   - If the user is using complex K8s version like "v1.21.3-mirantis-1", use below kubeVersion check in helm/csi-unity/Chart.yaml file.
-       kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
-
+
+- The user needs to validate the YAML syntax and array-related key/values while replacing the vxflexos-creds secret.
+- If you want to create a new array or update the MDM values in the secret, you will need to reinstall the driver. If you change other details, such as login information, the secret will dynamically update -- see [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration) for more details.
+- Old `json` format of the array configuration file is still supported in this release. If you already have your configuration in `json` format, you may continue to maintain it or you may transfer this configuration to `yaml` format and replace/update the secret.
+- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
+- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
+- If the user is using a complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check in the helm/csi-vxflexos/Chart.yaml file.
+       kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+
+
 5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
 
 6. If using automated SDC deployment:
@@ -206,6 +182,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 | logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
 | kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet |
 | defaultFsType | Used to set the default FS type which will be used for mount volumes if FsType is not specified in the storage class. Allowed values: ext4, xfs. | Yes | ext4 |
+| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None, File and ReadWriteOnceWithFSType`. | No | "ReadWriteOnceWithFSType" |
 | imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. Allowed values: Always, IfNotPresent, Never. | Yes | IfNotPresent |
 | enablesnapshotcgdelete | A boolean that, when enabled, will delete all snapshots in a consistency group everytime a snap in the group is deleted. | Yes | false |
 | enablelistvolumesnapshot | A boolean that, when enabled, will allow list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap). It is recommend this be false unless instructed otherwise. | Yes | false |
@@ -221,14 +198,13 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
 | enabled | Enable/Disable deployment of external health monitor sidecar. | No | false |
 | volumeHealthMonitorInterval | Interval of monitoring volume health condition. <br>Allowed values: Number followed by unit (s,m,h)| No | 60s |
 | **node** | This section allows the configuration of node-specific parameters. | - | - |
+| healthMonitor.enabled | Enable/Disable health monitor of CSI volumes - volume usage and volume condition | No | false |
 | nodeSelector | Defines what nodes would be selected for pods of node daemonset. Leave as blank to use all nodes. | Yes | " " |
 | tolerations | Defines tolerations that would be applied to node daemonset. Leave as blank to install node driver only on worker nodes. | Yes | " " |
 | **monitor** | This section allows the configuration of the SDC monitoring pod. | - | - |
 | enabled | Set to enable the usage of the monitoring pod. | Yes | false |
 | hostNetwork | Set whether the monitor pod should run on the host network or not. | Yes | true |
 | hostPID | Set whether the monitor pod should run in the host namespace or not. | Yes | true |
-| **healthMonitor** | This section configures node side volume health monitoring | - | -|
-| enabled| Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
 | **vgsnapshotter** | This section allows the configuration of the volume group snapshotter(vgsnapshotter) pod. | - | - |
 | enabled | A boolean that enable/disable vg snapshotter feature. | No | false |
 | image | Image for vg snapshotter. | No | " " |
@@ -338,8 +314,8 @@ Starting CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volum
 
 ### What happens to my existing Volume Snapshot Classes?
 
-*Upgrading from CSI PowerFlex v2.1 driver*:
+*Upgrading from CSI PowerFlex v2.2 driver*:
 The existing volume snapshot class will be retained.
 
 *Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.3.
diff --git a/content/v1/csidriver/installation/helm/powermax.md b/content/v1/csidriver/installation/helm/powermax.md
index ef8882ce05..d63d770012 100644
--- a/content/v1/csidriver/installation/helm/powermax.md
+++ b/content/v1/csidriver/installation/helm/powermax.md
@@ -162,7 +162,7 @@
 
 **Steps**
 
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
 2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
 3. Edit the `samples/secret/secret.yaml file, point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example:
@@ -178,16 +178,40 @@
 
 | Parameter | Description | Required | Default |
 |-----------|--------------|------------|----------|
+| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
+|defaultCredentialsSecret| This secret name refers to:<br>1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode.<br>2. The proxy credentials if the driver is installed with proxy in StandAlone mode.
3. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds | +| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - | +| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001| +| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 | +| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| No | https://backup-1.unisphe.re:8443 | +| managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays.| - | - | +| endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 | +| credentialsSecret| This refers to the user credentials for _endpoint_ | No| primary-1-secret| +| skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates.| No | "True" | +| certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty| +| limits | This refers to various limits for Reverse Proxy | No | - | +| maxActiveRead | This refers to the maximum concurrent READ request handled by the reverse proxy.| No | 5 | +| maxActiveWrite | This refers to the maximum concurrent WRITE request handled by the reverse proxy.| No | 4 | +| maxOutStandingRead | This refers to maximum queued READ request when reverse proxy receives more than _maxActiveRead_ requests. | No | 50 | +| maxOutStandingWrite| This refers to maximum queued WRITE request when reverse proxy receives more than _maxActiveWrite_ requests.| No | 50 | | kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet | | imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent | | clusterPrefix | Prefix that is used during the creation of various masking-related entities (Storage Groups, Masking Views, Hosts, and Volume Identifiers) on the array. The value that you specify here must be unique. Ensure that no other CSI PowerMax driver is managing the same arrays that are configured with the same prefix. The maximum length for this prefix is three characters. | Yes | "ABC" | +| logLevel | CSI driver log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | "debug" | +| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" | +| kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet | | defaultFsType | Used to set the default FS type for external provisioner | Yes | ext4 | | portGroups | List of comma-separated port group names. 
Any port group that is specified here must be present on all the arrays that the driver manages. | For iSCSI Only | "PortGroup1, PortGroup2, PortGroup3" | -| storageResourcePool | This parameter must mention one of the SRPs on the PowerMax array that the symmetrixID specifies. This value is used to create the default storage class. | Yes| "SRP_1" | -| serviceLevel | This parameter must mention one of the Service Levels on the PowerMax array. This value is used to create the default storage class. | Yes| "Bronze" | | skipCertificateValidation | Skip client-side TLS verification of Unisphere certificates | No | "True" | | transportProtocol | Set the preferred transport protocol for the Kubernetes cluster which helps the driver choose between FC and iSCSI when a node has both FC and iSCSI connectivity to a PowerMax array.| No | Empty| | nodeNameTemplate | Used to specify a template that will be used by the driver to create Host/IG names on the PowerMax array. To use the default naming convention, leave this value empty. | No | Empty| +| modifyHostName | Change any existing host names. When nodenametemplate is set, it changes the name to the specified format else it uses driver default host name format. | No | false | +| powerMaxDebug | Enables low level and http traffic logging between the CSI driver and Unisphere. Don't enable this unless asked to do so by the support team. | No | false | +| enableCHAP | Determine if the driver is going to configure SCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key "chapsecret" | No | false | +| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" | +| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.3.0 | +| images | Defines the container images used by the driver. | - | - | +| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc | | **controller** | Allows configuration of the controller-specific parameters.| - | - | | controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 | | volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" | @@ -202,25 +226,10 @@ CRDs should be configured during replication prepare stage with repctl as descri | tolerations | Add tolerations as per requirement | No | - | | nodeSelector | Add node selectors as per requirement | No | - | | healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false | -| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - | -|defaultCredentialsSecret| This secret name refers to:
1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode.
2. The proxy credentials if the driver is installed with proxy in StandAlone mode.
3. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds | -| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - | -| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001| -| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 | -| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| No | https://backup-1.unisphe.re:8443 | -| managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays.| - | - | -| endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 | -| credentialsSecret| This refers to the user credentials for _endpoint_ | No| primary-1-secret| -| skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates.| No | "True" | -| certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty| -| limits | This refers to various limits for Reverse Proxy | No | - | -| maxActiveRead | This refers to the maximum concurrent READ request handled by the reverse proxy.| No | 5 | -| maxActiveWrite | This refers to the maximum concurrent WRITE request handled by the reverse proxy.| No | 4 | -| maxOutStandingRead | This refers to maximum queued READ request when reverse proxy receives more than _maxActiveRead_ requests. | No | 50 | -| maxOutStandingWrite| This refers to maximum queued WRITE request when reverse proxy receives more than _maxActiveWrite_ requests.| No | 50 | +| topologyControl.enabled | Allows to enable/disable topology control to filter topology keys | No | false | | **csireverseproxy**| This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - | | enabled | Boolean parameter which indicates if CSI PowerMax Reverse Proxy is going to be configured and installed.
**NOTE:** If not enabled, then there is no requirement to configure any of the following values. | No | "False" |
-| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v1.4.0 |
+| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v2.1.0 |
 | tlsSecret | This refers to the TLS secret of the Reverse Proxy Server.| Yes | csirevproxy-tls-secret |
 | deployAsSidecar | If set to _true_, the Reverse Proxy is installed as a sidecar to the driver's controller pod; otherwise, it is installed as a separate deployment.| Yes | "True" |
 | port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation| Yes | 2222 |
@@ -230,14 +239,29 @@ CRDs should be configured during replication prepare stage with repctl as descri
 | sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
 | proxyHost | Hostname of the csm-authorization server. | No | Empty |
 | skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
+| **migration** | [Migration](../../../../replication/migrating-volumes) is an optional feature to enable migration between storage classes | - | - |
+| enabled | A boolean that enables/disables the migration feature. | No | false |
+| image | Image for dell-csi-migrator sidecar. | No | " " |
+| migrationPrefix | Enables the migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
+| **replication** | [Replication](../../../../replication/deployment) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters.| - | - |
+| enabled | A boolean that enables/disables the replication feature. | No | false |
+| image | Image for dell-csi-replicator sidecar. | No | " " |
+| replicationContextPrefix | Enables sidecars to read required information from the volume context | No | powermax |
+| replicationPrefix | Determines if replication is enabled | No | replication.storage.dell.com |

 8. Install the driver using the `csi-install.sh` bash script by running `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ../helm/my-powermax-settings.yaml`
+9. Alternatively, you can install the driver using the standalone Helm chart with the command `helm install --values my-powermax-settings.yaml --namespace powermax powermax ./csi-powermax`

 *Note:*
 - For detailed instructions on how to run the install scripts, see the readme document in the dell-csi-helm-installer folder.
 - There are a set of samples provided [here](#sample-values-file) to help you configure the driver with reverse proxy.
 - This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if the iSCSI initiators have been configured on all nodes. You can also skip the verification step by specifying the `--skip-verify-node` option.
 - In order to enable authorization, there should be an authorization proxy server already installed.
+- The PowerMax array username must have the `StorageAdmin` role to be able to perform CRUD operations.
+- If you are using a complex Kubernetes version such as "v1.22.3-mirantis-1", use the following kubeVersion check in the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) file: kubeVersion: ">= 1.22.0-0 < 1.25.0-0".
+- Provide all boolean values with double quotes. This applies only to values.yaml. Example: "true"/"false".
+- The controllerCount parameter value must be less than or equal to the number of nodes in the Kubernetes cluster; otherwise, the install script fails.
+- The endpoint should not have any special characters at the end, apart from the port number.

 ## Storage Classes
@@ -251,15 +275,15 @@ Upgrading from an older version of the driver: The storage classes will be delet

 ## Volume Snapshot Class

-Starting with CSI PowerMax v1.7, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
+Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.

 ### What happens to my existing Volume Snapshot Classes?

-*Upgrading from CSI PowerMax v2.1 driver*:
+*Upgrading from CSI PowerMax v2.1.0 driver*:
 The existing volume snapshot class will be retained.

 *Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7.0 or higher before upgrading to 2.3.0.

 ## Sample values file
 The following sections have useful snippets from the `values.yaml` file which provide more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes.

diff --git a/content/v1/csidriver/installation/helm/powerstore.md b/content/v1/csidriver/installation/helm/powerstore.md
index 7b009d83a4..858b0385db 100644
--- a/content/v1/csidriver/installation/helm/powerstore.md
+++ b/content/v1/csidriver/installation/helm/powerstore.md
@@ -62,18 +62,25 @@ To do this, run the `systemctl enable --now iscsid` command.

 For information about configuring iSCSI, see _Dell PowerStore documentation_ on Dell Support.

-### Set up the NVMe/TCP Initiator
+### Set up the NVMe Initiator

-If you want to use the protocol, set up the NVMe/TCP initiators as follows:
+If you want to use the protocol, set up the NVMe initiators as follows:
 - The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and interactive shell option. The NVMe CLI tool is installed on the host using the following command: `sudo apt install nvme-cli`

+**Requirements for NVMeTCP**
 - Modules including the nvme, nvme_core, nvme_fabrics, and nvme_tcp are required for using NVMe over Fabrics using TCP. Load the NVMe and NVMe-OF modules using the following commands:
 ```bash
 modprobe nvme
 modprobe nvme_tcp
 ```
+**Requirements for NVMeFC**
+- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be completed for NVMeFC.
+
+*NOTE:*
+- Do not load the nvme_tcp module for NVMeFC.
+
 ### Linux multipathing requirements

 Dell PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell PowerStore.
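As a quick reference, the sketch below shows one common way to enable multipathing on a RHEL/CentOS worker node; package names and configuration defaults vary by distribution, so treat it as an illustration rather than the authoritative procedure:

```bash
# Install the device-mapper multipath package (RHEL/CentOS; use "apt install multipath-tools" on Ubuntu)
yum install -y device-mapper-multipath

# Enable multipathing and start multipathd; this also generates a default /etc/multipath.conf
mpathconf --enable --with_multipathd y

# Verify that the multipath daemon is active
systemctl status multipathd
```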
@@ -110,7 +117,21 @@ Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/
 - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
 - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.

-## Volume Health Monitoring
+#### Installation example
+
+You can install CRDs and the default snapshot controller by running these commands:
+```bash
+git clone https://github.com/kubernetes-csi/external-snapshotter/
+cd ./external-snapshotter
+git checkout release-
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
+```
+
+*NOTE:*
+- It is recommended to use the 5.0.x version of snapshotter/snapshot-controller.
+
+### Volume Health Monitoring

 Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via helm. To enable this feature, add the following block to the driver manifest before installing the driver. This ensures to install external
@@ -142,21 +163,6 @@ node:
   enabled: false
 ```

-#### Installation example
-
-You can install CRDs and default snapshot controller by running following commands:
-```bash
-git clone https://github.com/kubernetes-csi/external-snapshotter/
-cd ./external-snapshotter
-git checkout release-
-kubectl kustomize client/config/crd | kubectl create -f -
-kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
-```
-
-*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
-- The CSI external-snapshotter sidecar is installed along with the driver and does not involve any extra configuration.
-
 ### (Optional) Replication feature Requirements

 Applicable only if you decided to enable the Replication feature in `values.yaml`
@@ -174,7 +180,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
 ## Install the Driver

 **Steps**
-1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
 2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example; you can choose any name for the namespace, but make sure to use the same namespace during the whole installation.
 3. Check `helm/csi-powerstore/driver-image.yaml` and confirm the driver image points to the new image.
@@ -184,16 +190,16 @@ CRDs should be configured during replication prepare stage with repctl as descri
     - *username*, *password*: defines credentials for connecting to array.
     - *skipCertificateValidation*: defines if we should use insecure connection or not.
     - *isDefault*: defines if we should treat the current array as a default.
-    - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, None, or auto).
+    - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto).
     - *nasName*: defines what NAS should be used for NFS volumes.
     - *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.

 Add more blocks similar to above for each PowerStore array if necessary.
-5. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
-
+5. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
+6. Create storage classes using the ones from the `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
+
 > If you do not specify the `arrayID` parameter in the storage class, then the array that was specified as the default would be used for provisioning volumes.
-6. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
 7. Copy the default values.yaml file: `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
 8. Edit the newly created values file and provide values for the following parameters: `vi my-powerstore-settings.yaml`
@@ -221,6 +227,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
 | node.nodeSelector | Defines what nodes would be selected for pods of node daemonset | Yes | " " |
 | node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
 | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+| controller.vgsnapshot.enabled | To enable or disable the volume group snapshot feature | No | "true" |

 9. Install the driver using the `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
 - After the driver is installed, you can check the condition of the driver pods by running `kubectl get all -n csi-powerstore`
@@ -257,7 +264,7 @@ There are samples storage class yaml files available under `samples/storageclass
 allowedTopologies:
   - matchLabelExpressions:
       - key: csi-powerstore.dellemc.com/12.34.56.78-iscsi
-# replace "-iscsi" with "-fc", "-nvme" or "-nfs" at the end to use FC, NVMe or NFS enabled hosts
+# replace "-iscsi" with "-fc", "-nvmetcp", "-nvmefc", or "-nfs" at the end to use FC, NVMeTCP, NVMeFC, or NFS enabled hosts
 # replace "12.34.56.78" with PowerStore endpoint IP
         values:
           - "true"
@@ -272,15 +279,15 @@ kubectl create -f

 ## Volume Snapshot Class

-Starting CSI PowerStore v1.4, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
+Starting CSI PowerStore v1.4.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.

 ### What happens to my existing Volume Snapshot Classes?

-*Upgrading from CSI PowerStore v2.1 driver*:
+*Upgrading from CSI PowerStore v2.1.0 driver*:
 The existing volume snapshot class will be retained.
*Upgrading from an older version of the driver*: -It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4 or higher, before upgrading to 2.2. +It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4.0 or higher, before upgrading to 2.3.0. ## Dynamically update the powerstore secrets diff --git a/content/v1/csidriver/installation/helm/unity.md b/content/v1/csidriver/installation/helm/unity.md index 0db49246f5..38000db82b 100644 --- a/content/v1/csidriver/installation/helm/unity.md +++ b/content/v1/csidriver/installation/helm/unity.md @@ -1,14 +1,14 @@ --- -title: Unity +title: Unity XT description: > - Installing CSI Driver for Unity via Helm + Installing CSI Driver for Unity XT via Helm --- -The CSI Driver for Dell Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer). +The CSI Driver for Dell Unity XT can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer). The controller section of the Helm chart installs the following components in a _Deployment_: -- CSI Driver for Unity +- CSI Driver for Unity XT - Kubernetes External Provisioner, which provisions the volumes - Kubernetes External Attacher, which attaches the volumes to the containers - Kubernetes External Snapshotter, which provides snapshot support @@ -17,29 +17,78 @@ The controller section of the Helm chart installs the following components in a The node section of the Helm chart installs the following component in a _DaemonSet_: -- CSI Driver for Unity +- CSI Driver for Unity XT - Kubernetes Node Registrar, which handles the driver registration ## Prerequisites -Before you install CSI Driver for Unity, verify the requirements that are mentioned in this topic are installed and configured. +Before you install CSI Driver for Unity XT, verify the requirements that are mentioned in this topic are installed and configured. ### Requirements * Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities)) * Install Helm v3 -* To use FC protocol, the host must be zoned with Unity array and Multipath needs to be configured +* To use FC protocol, the host must be zoned with Unity XT array and Multipath needs to be configured * To use iSCSI protocol, iSCSI initiator utils packages needs to be installed and Multipath needs to be configured * To use NFS protocol, NFS utility packages needs to be installed * Mount propagation is enabled on container runtime that is being used +### Install Helm 3.0 + +Install Helm 3.0 on the master node before you install the CSI Driver for Dell Unity XT. + +**Steps** + +Run the `curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash` command to install Helm 3.0. + + +### Fibre Channel requirements + +Dell Unity XT supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the +following requirement is met before you install the CSI Driver for Dell Unity XT: +- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done. 
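Before moving on, it can help to confirm the node-side WWPNs that must be zoned. The snippet below is a generic Linux check, not a Dell-specific requirement:

```bash
# List the WWPNs of the node's Fibre Channel HBAs; provide these to the SAN
# administrator when zoning the host to the Unity XT array
cat /sys/class/fc_host/host*/port_name
```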
+
+
+### Set up the iSCSI Initiator
+The CSI Driver for Dell Unity XT supports iSCSI connectivity.
+
+If you use the iSCSI protocol, set up the iSCSI initiators as follows:
+- Ensure that the iSCSI initiators are available on both Controller and Worker nodes.
+- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell Unity XT array that has IP interfaces. Manually create IP routes for each node that connects to the Dell Unity XT.
+- All Kubernetes nodes must have the _iscsi-initiator-utils_ package for CentOS/RHEL or the _open-iscsi_ package for Ubuntu installed, and the _iscsid_ service must be enabled and running. To do this, run the `systemctl enable --now iscsid` command.
+- Ensure that the unique initiator name is set in _/etc/iscsi/initiatorname.iscsi_.
+
+For more information about configuring iSCSI, see the [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+
+### Linux multipathing requirements
+Dell Unity XT supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell Unity XT.
+
+Set up Linux multipathing as follows:
+- Ensure that all nodes have the _Device Mapper Multipathing_ package installed.
+> You can install it by running `yum install device-mapper-multipath` on CentOS or `apt install multipath-tools` on Ubuntu. This package should create a multipath configuration file located in `/etc/multipath.conf`.
+- Enable multipathing using the `mpathconf --enable --with_multipathd y` command.
+- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
+- Ensure that the multipath command for `multipath.conf` is available on all Kubernetes nodes.
+
+As a best practice, use the following options to help the operating system and the multipathing software detect path changes efficiently:
+```text
+path_grouping_policy multibus
+path_checker tur
+features "1 queue_if_no_path"
+path_selector "round-robin 0"
+no_path_retry 10
+```
+
 ## Install CSI Driver

-Install CSI Driver for Unity using this procedure.
+Install CSI Driver for Unity XT using this procedure.

 *Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.2.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.3.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
 * In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
 * Ensure the _unity_ namespace exists in the Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -47,19 +96,19 @@ Install CSI Driver for Unity XT using this procedure.

 Procedure

-1. Collect information from the Unity Systems like Unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
+1. Collect information from the Unity XT Systems like Unique ArrayId, IP address, username, and password. Make a note of the value for these parameters as they must be entered in the `secret.yaml` and `myvalues.yaml` file.
 **Note**:
- * ArrayId corresponds to the serial number of Unity array.
- * Unity Array username must have role as Storage Administrator to be able to perform CRUD operations.
+ * ArrayId corresponds to the serial number of the Unity XT array.
+ * The Unity XT array username must have the Storage Administrator role to be able to perform CRUD operations.
 * If you are using a complex Kubernetes version such as "v1.21.3-mirantis-1", use the following kubeVersion check in the helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.25.0-0"

 2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory as `csi-install.sh` to customize settings for installation.

 3. Edit `myvalues.yaml` to set the following parameters for your installation:

-   The following table lists the primary configurable parameters of the Unity driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/csi-unity/blob/master/helm/csi-unity/values.yaml) file in this repository.
+   The following table lists the primary configurable parameters of the Unity XT driver chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/csi-unity/blob/master/helm/csi-unity/values.yaml) file in this repository.

 | Parameter | Description | Required | Default |
 | --------- | ----------- | -------- |-------- |
@@ -127,12 +176,12 @@ Procedure

 5. Prepare the `secret.yaml` for driver configuration. The following table lists driver configuration parameters for multiple storage arrays.

-   | Parameter | Description | Required | Default |
-   | ------------------------- | ----------------------------------- | -------- |-------- |
-   | storageArrayList.username | Username for accessing Unity system | true | - |
-   | storageArrayList.password | Password for accessing Unity system | true | - |
-   | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - |
-   | storageArrayList.arrayId | ArrayID for Unity system | true | - |
+   | Parameter | Description | Required | Default |
+   | ------------------------- | ---------------------------------------------- | -------- |-------- |
+   | storageArrayList.username | Username for accessing Unity XT system | true | - |
+   | storageArrayList.password | Password for accessing Unity XT system | true | - |
+   | storageArrayList.endpoint | REST API gateway HTTPS endpoint of the Unity XT system| true | - |
+   | storageArrayList.arrayId | ArrayID for Unity XT system | true | - |
 | storageArrayList.skipCertificateValidation | "skipCertificateValidation" determines if the driver is going to validate Unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of the CA which signed the Unisphere certificate. | true | true |
 | storageArrayList.isDefault| An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
@@ -227,7 +276,7 @@ Procedure

-7. Run the `./csi-install.sh --namespace unity --values ./myvalues.yaml` command to proceed with the installation.
+7. Run the `./csi-install.sh --namespace unity --values ./myvalues.yaml` command to proceed with the installation using the bash script.
 A successful installation must display messages that look similar to the following samples:
 ```
@@ -294,13 +343,27 @@ Procedure

 At the end of the script, the unity-controller Deployment and the unity-node DaemonSet will be ready. Execute the command `kubectl get pods -n unity` to get the status of the pods, and you will see the following:

-   * One or more Unity Controller (based on controllerCount) with 5/5 containers ready, and status displayed as Running.
-   * Agent pods with 2/2 containers and the status displayed as Running.
-
+   * One or more Unity XT Controllers (based on controllerCount) with 5/5 containers ready, and status displayed as Running.
+   * Agent pods with 2/2 containers and the status displayed as Running.
+
+   **Note**:
+   To install a nightly or latest CSI driver build using the bash script, use this command:
+   `./csi-install.sh --namespace unity --values ./myvalues.yaml --version nightly/latest`
+
+8. You can also install the driver using the standalone Helm chart by running the `helm install` command, first with the `--dry-run` flag to confirm that the various parameters are as desired. Once the parameters are validated, run the command without the `--dry-run` flag. Note: This example assumes that the user is in the repo root helm folder, i.e., csi-unity/helm.
+
+   **Syntax**: `helm install --dry-run --values <values-file> --namespace <namespace> <name> <helm-path>`
+   `<namespace>` - namespace of the driver installation.
+   `<name>` - the Helm release name; unity, in the case of the unity-creds and unity-certs-0 secrets.
+   `<helm-path>` - path of the helm directory.
+   e.g.: `helm install --dry-run --values ./csi-unity/myvalues.yaml --namespace unity unity ./csi-unity`
+
 ## Certificate validation for Unisphere REST API calls

-This topic provides details about setting up the certificate validation for the CSI Driver for Dell Unity.
+This topic provides details about setting up the Dell Unity XT certificate validation for the CSI Driver.

 *Before you begin*
@@ -334,15 +397,15 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis

 ## Volume Snapshot Class

-For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does not create any Volume Snapshot classes as part of the driver installation. A wide set of annotated storage class manifests have been provided in the `csi-unity/samples/volumesnapshotclass/` folder. Use these samples to create new Volume Snapshot to provision storage.
+A wide set of annotated Volume Snapshot Class manifests have been provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshot Classes to provision storage.

 ### What happens to my existing Volume Snapshot Classes?

-*Upgrading from CSI Unity v2.1 driver*:
+*Upgrading from CSI Unity XT v2.1.0 driver*:
 The existing volume snapshot class will be retained.

 *Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity to 1.6 or higher, before upgrading to 2.2.
+It is strongly recommended to upgrade the earlier versions of CSI Unity XT to v1.6.0 or higher before upgrading to v2.3.0.

 ## Storage Classes

 Storage Classes are an essential Kubernetes construct for Storage provisioning. A wide set of annotated storage class manifests have been provided in the [samples/storageclass](https://github.com/dell/csi-unity/tree/master/samples/storageclass) folder. Use these samples to create new storage classes to provision storage.

-For CSI Driver for Unity, a wide set of annotated storage class manifests have been provided in the `csi-unity/samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
+For the Unity XT CSI Driver, a wide set of annotated storage class manifests have been provided in the `csi-unity/samples/storageclass` folder. Use these samples to create new storage classes to provision storage.

 ### What happens to my existing storage classes?
@@ -393,9 +456,7 @@ User can update secret using the following command:
 ```
 **Note**: Updating unity-certs-x secrets is a manual process, unlike unity-creds. Users have to re-install the driver in case of updating/adding the SSL certificates or changing the certSecretCount parameter.

-## Dynamic Logging Configuration
-
-This feature is introduced in CSI Driver for unity version 2.0.0.
+## Dynamic Logging Configuration

 ### Helm based installation
 As part of driver installation, a ConfigMap with the name `unity-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` that specifies the current log level of the CSI driver.
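For example, assuming the driver is installed in the unity namespace, the logging configuration can be inspected and changed at runtime roughly as follows (a sketch; the exact ConfigMap contents may vary by release):

```bash
# View the current dynamic logging configuration of the driver
kubectl describe configmap unity-config-params -n unity

# Change CSI_LOG_LEVEL at runtime (allowed values typically include "info" and "debug")
kubectl edit configmap unity-config-params -n unity
```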
diff --git a/content/v1/csidriver/installation/offline/_index.md b/content/v1/csidriver/installation/offline/_index.md index 07b0000bdb..127d35c937 100644 --- a/content/v1/csidriver/installation/offline/_index.md +++ b/content/v1/csidriver/installation/offline/_index.md @@ -12,7 +12,7 @@ This includes the following drivers: * [PowerMax](https://github.com/dell/csi-powermax) * [PowerScale](https://github.com/dell/csi-powerscale) * [PowerStore](https://github.com/dell/csi-powerstore) -* [Unity](https://github.com/dell/csi-unity) +* [Unity XT](https://github.com/dell/csi-unity) As well as the Dell CSI Operator * [Dell CSI Operator](https://github.com/dell/dell-csi-operator) @@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary For example, here is the output of a request to build an offline bundle for the Dell CSI Operator: ``` -git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git +git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git ``` ``` cd dell-csi-operator/scripts diff --git a/content/v1/csidriver/installation/operator/_index.md b/content/v1/csidriver/installation/operator/_index.md index be62fc2dec..68113a0e90 100644 --- a/content/v1/csidriver/installation/operator/_index.md +++ b/content/v1/csidriver/installation/operator/_index.md @@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa #### Full list of CSI Drivers and versions supported by the Dell CSI Operator | CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version | | ------------------ | --------- | -------------- | -------------------- | --------------------- | -| CSI PowerMax | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | | CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerFlex | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | +| CSI PowerMax | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | | CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerScale | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | +| CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | | CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | -| CSI Unity | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | -| CSI Unity | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | -| CSI Unity | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerStore | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | +| CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | +| CSI Unity XT | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI Unity XT | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | +| CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | | CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | | CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | +| CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
@@ -97,7 +97,7 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n #### Steps >**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.** -1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`. +1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`. 2. cd dell-csi-operator 3. Run `bash scripts/install.sh` to install the operator. >NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. @@ -126,7 +126,7 @@ For installation of the supported drivers, a `CustomResource` has to be created ### Pre-requisites for upstream Kubernetes Clusters On upstream Kubernetes clusters, make sure to install * VolumeSnapshot CRDs - * On clusters running v1.21,v1.22 & v1.23, make sure to install v1 VolumeSnapshot CRDs + * On clusters running v1.22,v1.23 & v1.24, make sure to install v1 VolumeSnapshot CRDs * External Volume Snapshot Controller with the correct version ### Pre-requisites for Red Hat OpenShift Clusters @@ -144,7 +144,7 @@ metadata: spec: config: ignition: - version: 2.2.0 + version: 3.2.0 systemd: units: - name: "iscsid.service" @@ -187,7 +187,7 @@ metadata: spec: config: ignition: - version: 2.2.0 + version: 3.2.0 storage: files: - contents: @@ -257,9 +257,9 @@ If you are installing the latest versions of the CSI drivers, the driver control The CSI Drivers installed by the Dell CSI Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include – * Modifying the installation directly via `kubectl edit` - For e.g. - If the name of the installed unity driver is unity, then run + For example - If the name of the installed Unity XT driver is unity, then run ``` - # Replace driver-namespace with the namespace where the Unity driver is installed + # Replace driver-namespace with the namespace where the Unity XT driver is installed $ kubectl edit csiunity/unity -n ``` and modify the installation. The usual fields to edit are the version of drivers and sidecars and the env variables. @@ -274,7 +274,7 @@ The below notes explain some of the general items to take care of. 1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required. ```yaml driver: - configVersion: v2.2.0 + configVersion: v2.3.0 ``` 2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator. To enable this feature, we will have to modify the below block while upgrading the driver.To get the volume health state add @@ -308,13 +308,13 @@ The below notes explain some of the general items to take care of. 
name: snapshotter - args: - --monitor-interval=60s - image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.4.0 + image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0 imagePullPolicy: IfNotPresent name: external-health-monitor - image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0 imagePullPolicy: IfNotPresent name: attacher - - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0 + - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1 imagePullPolicy: IfNotPresent name: registrar - image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0 @@ -348,7 +348,7 @@ data: * Adding (supported) environment variables * Updating the image of the driver ## Limitations -* The Dell CSI Operator can't manage any existing driver installed using Helm charts. If you already have installed one of the DellEMC CSI driver in your cluster and want to use the operator based deployment, uninstall the driver and then redeploy the driver following the installation procedure described above +* The Dell CSI Operator can't manage any existing driver installed using Helm charts. If you already have installed one of the Dell CSI drivers in your cluster and want to use the operator based deployment, uninstall the driver and then redeploy the driver following the installation procedure described. * The Dell CSI Operator is not fully compliant with the OperatorHub React UI elements and some of the Custom Resource fields may show up as invalid or unsupported in the OperatorHub GUI. To get around this problem, use kubectl/oc commands to get details about the Custom Resource(CR). This issue will be fixed in the upcoming releases of the Dell CSI Operator diff --git a/content/v1/csidriver/installation/operator/isilon.md b/content/v1/csidriver/installation/operator/isilon.md index 00e4c69924..6b5fcef159 100644 --- a/content/v1/csidriver/installation/operator/isilon.md +++ b/content/v1/csidriver/installation/operator/isilon.md @@ -116,6 +116,7 @@ User can query for CSI-PowerScale driver using the following command: | --------- | ----------- | -------- |-------- | | dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet | | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" | + | X_CSI_MAX_PATH_LIMIT | Defines the maximum length of path for a volume | No | 192 | | ***Common parameters for node and controller*** | | CSI_ENDPOINT | The UNIX socket address for handling gRPC calls | No | /var/run/csi/csi.sock | | X_CSI_ISI_SKIP_CERTIFICATE_VALIDATION | Specifies whether SSL security needs to be enabled for communication between PowerScale and CSI Driver | No | true | @@ -150,7 +151,7 @@ User can query for CSI-PowerScale driver using the following command: 3. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation. ## Volume Health Monitoring -This feature is introduced in CSI Driver for unity version 2.1.0. +This feature is introduced in CSI Driver for PowerScale version 2.1.0. 
 ### Operator based installation

diff --git a/content/v1/csidriver/installation/operator/powerflex.md b/content/v1/csidriver/installation/operator/powerflex.md
index ea959f4639..73350f7aa5 100644
--- a/content/v1/csidriver/installation/operator/powerflex.md
+++ b/content/v1/csidriver/installation/operator/powerflex.md
@@ -14,6 +14,7 @@ There are sample manifests provided which can be edited to do an easy installati
 Kubernetes Operators make it easy to deploy and manage the entire lifecycle of complex Kubernetes applications. Operators use Custom Resource Definitions (CRD) which represent the application and use custom controllers to manage them.

 ### Prerequisites:
+- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See the [troubleshooting section](../../../troubleshooting/powerflex.md) for details.
 #### SDC Deployment for Operator
 - This feature deploys the sdc kernel modules on all nodes with the help of an init container.
 - For non-supported versions of the OS, also perform the manual SDC deployment steps given below. Refer to https://hub.docker.com/r/dellemc/sdc for supported versions.
@@ -144,6 +145,7 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
   | Parameter | Description | Required | Default |
   | --------- | ----------- | -------- |-------- |
   | replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, excess pods will stay in a pending state. The default is 2, which allows for Controller high availability. | Yes | 2 |
+  | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
   | ***Common parameters for node and controller*** |
   | X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT | Enable list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap) | No | false |
   | X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE | Enable this to automatically delete all snapshots in a consistency group when a snap in the group is deleted | No | false |
   | X_CSI_ALLOW_RWO_MULTI_POD_ACCESS | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3 NodePublishVolume description, which requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | No | false |

 5. Execute the `kubectl create -f ` command to create the PowerFlex custom resource. This command will deploy the CSI-PowerFlex driver.
- Example CR for PowerFlex Driver - ```yaml - apiVersion: storage.dell.com/v1 +```yaml +apiVersion: storage.dell.com/v1 kind: CSIVXFlexOS metadata: name: test-vxflexos namespace: test-vxflexos spec: driver: - configVersion: v2.2.0 + configVersion: v2.3.0 replicas: 1 dnsPolicy: ClusterFirstWithHostNet forceUpdate: false + fsGroupPolicy: File common: - image: "dellemc/csi-vxflexos:v2.2.0" + image: "dellemc/csi-vxflexos:v2.3.0" imagePullPolicy: IfNotPresent envs: - name: X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT diff --git a/content/v1/csidriver/installation/operator/powermax.md b/content/v1/csidriver/installation/operator/powermax.md index 781eb18fe7..7c1e13c246 100644 --- a/content/v1/csidriver/installation/operator/powermax.md +++ b/content/v1/csidriver/installation/operator/powermax.md @@ -16,6 +16,27 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c ### Prerequisite +#### Fibre Channel Requirements + +CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver: +- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed. +- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array. +- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs. + +#### iSCSI Requirements + +The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays. + +Set up the iSCSI initiators as follows: +- All Kubernetes nodes must have the _iscsi-initiator-utils_ package installed. +- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed. +- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required. +- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array. +- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name. + +For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf). + + #### Create secret for client-side TLS verification (Optional) Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls). @@ -57,6 +78,7 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri | Parameter | Description | Required | Default | | --------- | ----------- | -------- |-------- | | replicas | Controls the number of controller Pods you deploy. If controller Pods are greater than the number of available nodes, excess Pods will become stuck in pending. 
The default is 2, which allows for Controller high availability. | Yes | 2 | + | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None`, `File` and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" | | ***Common parameters for node and controller*** | | X_CSI_K8S_CLUSTER_PREFIX | Define a prefix that is appended to all resources created in the array; unique per K8s/CSI deployment; max length - 3 characters | Yes | XYZ | | X_CSI_POWERMAX_ENDPOINT | IP address of the Unisphere for PowerMax | Yes | https://0.0.0.0:8443 | @@ -65,12 +87,56 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri | X_CSI_MANAGED_ARRAYS | List of comma-separated array ID(s) which will be managed by the driver | Yes | - | | X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. Leave blank if not using reverse proxy | No | - | | X_CSI_GRPC_MAX_THREADS | Number of concurrent grpc requests allowed per client | No | 4 | + | X_CSI_IG_MODIFY_HOSTNAME | Change any existing host names. When nodenametemplate is set, it changes the name to the specified format; otherwise it uses the driver's default host name format. | No | false | + | X_CSI_IG_NODENAME_TEMPLATE | Provide a template for the CSI driver to use while creating the Host/IG on the array for the nodes in the cluster. It is of the format a-b-c-%foo%-xyz, where %foo% will be replaced by the host name of each node in the cluster. | No | - | | X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - | | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller and Node plugin. Provides details of volume status, usage and volume condition. As a prerequisite, the external-health-monitor sidecar section should be uncommented in the samples, which would install the sidecar | No | false | | ***Node parameters***| | X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false | + | X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disable topology control. It filters the arrays and associated transport protocols available to each node and creates topology keys based on any such user input. | No | false | 5. Execute the following command to create the PowerMax custom resource: `kubectl create -f `. The above command will deploy the CSI-PowerMax driver. +**Note** - If the CSI driver is installed using the OCP UI, create these two ConfigMaps manually using the command `oc create -f ` +1. ConfigMap named powermax-config-params + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: powermax-config-params + namespace: test-powermax + data: + driver-config-params.yaml: | + CSI_LOG_LEVEL: "debug" + CSI_LOG_FORMAT: "JSON" + ``` + 2. ConfigMap named node-topology-config + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: node-topology-config + namespace: test-powermax + data: + topologyConfig.yaml: | + allowedConnections: + - nodeName: "node1" + rules: + - "000000000001:FC" + - "000000000002:FC" + - nodeName: "*" + rules: + - "000000000002:FC" + deniedConnections: + - nodeName: "node2" + rules: + - "000000000002:*" + - nodeName: "node3" + rules: + - "*:*" + + ``` + + + ### CSI PowerMax ReverseProxy CSI PowerMax ReverseProxy is an optional component that can be installed with the CSI PowerMax driver.
For more details on this feature see the related [documentation](../../../features/powermax#csi-powermax-reverse-proxy). @@ -113,7 +179,7 @@ metadata: namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver spec: # Image for CSI PowerMax ReverseProxy - image: dellemc/csipowermax-reverseproxy:v1.4.0 # <- CSI PowerMax Reverse Proxy image + image: dellemc/csipowermax-reverseproxy:v2.1.0 # <- CSI PowerMax Reverse Proxy image imagePullPolicy: Always # TLS secret which contains SSL certificate and private key for the Reverse Proxy server tlsSecret: csirevproxy-tls-secret @@ -199,8 +265,8 @@ metadata: namespace: test-powermax spec: driver: - # Config version for CSI PowerMax v2.2.0 driver - configVersion: v2.2.0 + # Config version for CSI PowerMax v2.3.0 driver + configVersion: v2.3.0 # replica: Define the number of PowerMax controller nodes # to deploy to the Kubernetes release # Allowed values: n, where n > 0 @@ -209,8 +275,8 @@ spec: dnsPolicy: ClusterFirstWithHostNet forceUpdate: false common: - # Image for CSI PowerMax driver v2.2.0 - image: dellemc/csi-powermax:v2.2.0 + # Image for CSI PowerMax driver v2.3.0 + image: dellemc/csi-powermax:v2.3.0 # imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container. # Allowed values: # Always: Always pull the image. @@ -304,6 +370,14 @@ spec: # Default value: false - name: X_CSI_HEALTH_MONITOR_ENABLED value: "false" + # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol + # if enabled, user can create custom topology keys by editing node-topology-config configmap. + # Allowed values: + # true: enable the filtration based on config map + # false: disable the filtration based on config map + # Default value: false + - name: X_CSI_TOPOLOGY_CONTROL_ENABLED + value: "false" --- apiVersion: v1 kind: ConfigMap @@ -314,13 +388,57 @@ data: driver-config-params.yaml: | CSI_LOG_LEVEL: "debug" CSI_LOG_FORMAT: "JSON" - +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: node-topology-config + namespace: test-powermax +data: + topologyConfig.yaml: | + # allowedConnections contains a list of (node, array and protocol) info for user allowed configuration + # For any given storage array ID and protocol on a Node, topology keys will be created for just those pair and + # every other configuration is ignored + # Please refer to the doc website about a detailed explanation of each configuration parameter + # and the various possible inputs + allowedConnections: + # nodeName: Name of the node on which user wants to apply given rules + # Allowed values: + # nodeName - name of a specific node + # * - all the nodes + # Examples: "node1", "*" + - nodeName: "node1" + # rules is a list of 'StorageArrayID:TransportProtocol' pair. 
':' is required between both value + # Allowed values: + # StorageArrayID: + # - SymmetrixID : for specific storage array + # - "*" :- for all the arrays connected to the node + # TransportProtocol: + # - FC : Fibre Channel protocol + # - ISCSI : iSCSI protocol + # - "*" - for all the possible Transport Protocol + # Examples: "000000000001:FC", "000000000002:*", "*:FC", "*:*" + rules: + - "000000000001:FC" + - "000000000002:FC" + - nodeName: "*" + rules: + - "000000000002:FC" + # deniedConnections contains a list of (node, array and protocol) info for denied configurations by user + # For any given storage array ID and protocol on a Node, topology keys will be created for every other configuration but + # not these input pairs + deniedConnections: + - nodeName: "node2" + rules: + - "000000000002:*" + - nodeName: "node3" + rules: + - "*:*" ``` Note: - - `dell-csi-operator` does not support the installation of CSI PowerMax ReverseProxy as a sidecar to the controller Pod. This facility is - only present with `dell-csi-helm-installer`. + - `dell-csi-operator` does not support the installation of CSI PowerMax ReverseProxy as a sidecar to the controller Pod. This facility is only present with `dell-csi-helm-installer`. - `Kubelet config dir path` is not yet configurable in case of Operator based driver installation. - Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation. @@ -332,7 +450,7 @@ Volume Health Monitoring feature is optional and by default this feature is disa To enable this feature, set `X_CSI_HEALTH_MONITOR_ENABLED` to `true` in the driver manifest under controller and node section. Also, install the `external-health-monitor` from `sideCars` section for controller plugin. To get the volume health state `value` under controller should be set to true as seen below. To get the volume stats `value` under node should be set to true. - +``` # Install the 'external-health-monitor' sidecar accordingly. # Allowed values: # true: enable checking of health condition of CSI volumes @@ -351,4 +469,40 @@ To get the volume health state `value` under controller should be set to true as # Default value: false - name: X_CSI_HEALTH_MONITOR_ENABLED value: "true" -``` \ No newline at end of file +``` + +## Support for custom topology keys + +This feature is introduced in CSI Driver for PowerMax version 2.3.0. + +### Operator based installation + +Support for custom topology keys is optional and by default this feature is disabled for drivers when installed via operator. + +X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol. If enabled, user can create custom topology keys by editing node-topology-config configmap. + +1. To enable this feature, set `X_CSI_TOPOLOGY_CONTROL_ENABLED` to `true` in the driver manifest under node section. + +``` + # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol + # if enabled, user can create custom topology keys by editing node-topology-config configmap. + # Allowed values: + # true: enable the filtration based on config map + # false: disable the filtration based on config map + # Default value: false + - name: X_CSI_TOPOLOGY_CONTROL_ENABLED + value: "false" +``` +2. 
Edit the sample config map "node-topology-config" present in [sample CRD](#sample--crd-file-for--powermax) with appropriate values: + + | Parameter | Description | + |-----------|--------------| + | allowedConnections | List of node, array and protocol info for user allowed configuration | + | allowedConnections.nodeName | Name of the node on which user wants to apply given rules | + | allowedConnections.rules | List of StorageArrayID:TransportProtocol pair | + | deniedConnections | List of node, array and protocol info for user denied configuration | + | deniedConnections.nodeName | Name of the node on which user wants to apply given rules | + | deniedConnections.rules | List of StorageArrayID:TransportProtocol pair | +
+ + >Note: Name of the configmap should always be `node-topology-config`. diff --git a/content/v1/csidriver/installation/operator/powerstore.md b/content/v1/csidriver/installation/operator/powerstore.md index ae60025943..d2b74a2896 100644 --- a/content/v1/csidriver/installation/operator/powerstore.md +++ b/content/v1/csidriver/installation/operator/powerstore.md @@ -30,7 +30,7 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c password: "password" # password for connecting to API skipCertificateValidation: true # indicates if client side validation of (management)server's certificate can be skipped isDefault: true # treat current array as a default (would be used by storage classes without arrayID parameter) - blockProtocol: "auto" # what SCSI transport protocol use on node side (FC, ISCSI, NVMeTCP, None, or auto) + blockProtocol: "auto" # what SCSI transport protocol use on node side (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto) nasName: "nas-server" # what NAS should be used for NFS volumes nfsAcls: "0777" # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. # NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares. @@ -69,13 +69,13 @@ metadata: namespace: test-powerstore spec: driver: - configVersion: v2.2.0 + configVersion: v2.3.0 replicas: 2 dnsPolicy: ClusterFirstWithHostNet forceUpdate: false fsGroupPolicy: ReadWriteOnceWithFSType common: - image: "dellemc/csi-powerstore:v2.2.0" + image: "dellemc/csi-powerstore:v2.3.0" imagePullPolicy: IfNotPresent envs: - name: X_CSI_POWERSTORE_NODE_NAME_PREFIX @@ -139,6 +139,7 @@ data: | X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" | | ***Node parameters*** | | X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false | + 6. Execute the following command to create PowerStore custom resource:`kubectl create -f `. The above command will deploy the CSI-PowerStore driver. - After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n ` @@ -177,7 +178,7 @@ volume stats value under node should be set to true. ## Dynamic Logging Configuration -This feature is introduced in CSI Driver for unity version 2.0.0. +This feature is introduced in CSI Driver for PowerStore version 2.0.0. ### Operator based installation As part of driver installation, a ConfigMap with the name `powerstore-config-params` is created using the manifest located in the sample file. This ConfigMap contains attributes `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver and `CSI_LOG_FORMAT` which specifies the current log format of the CSI driver. To set the default/initial log level user can set this field during driver installation. 
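For example, to change the log level of a running driver, this ConfigMap can be edited in place. A minimal sketch, assuming the driver is deployed in the `csi-powerstore` namespace:

```console
kubectl edit configmap -n csi-powerstore powerstore-config-params
```

Then set `CSI_LOG_LEVEL` under the `driver-config-params.yaml` data key to the desired level (for example "debug" or "info"); the driver picks up the change dynamically.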
diff --git a/content/v1/csidriver/installation/operator/unity.md b/content/v1/csidriver/installation/operator/unity.md index 93c0bb0f2f..89e8b9a699 100644 --- a/content/v1/csidriver/installation/operator/unity.md +++ b/content/v1/csidriver/installation/operator/unity.md @@ -1,24 +1,24 @@ --- -title: Unity +title: Unity XT description: > - Installing CSI Driver for Unity via Operator + Installing CSI Driver for Unity XT via Operator --- -## CSI Driver for Unity +## CSI Driver for Unity XT ### Pre-requisites -#### Create secret to store Unity credentials +#### Create secret to store Unity XT credentials Create a namespace called unity (it can be any user-defined name; But commands in this section assumes that the namespace is unity) Prepare the secret.yaml for driver configuration. The following table lists driver configuration parameters for multiple storage arrays. | Parameter | Description | Required | Default | | --------- | ----------- | -------- |-------- | -| username | Username for accessing Unity system | true | - | -| password | Password for accessing Unity system | true | - | -| restGateway | REST API gateway HTTPS endpoint Unity system| true | - | -| arrayId | ArrayID for Unity system | true | - | +| username | Username for accessing Unity XT system | true | - | +| password | Password for accessing Unity XT system | true | - | +| restGateway | REST API gateway HTTPS endpoint Unity XT system| true | - | +| arrayId | ArrayID for Unity XT system | true | - | | isDefaultArray | An array having isDefaultArray=true is for backward compatibility. This parameter should occur once in the list. | true | - | Ex: secret.yaml @@ -73,21 +73,21 @@ Execute command: ```kubectl create -f empty-secret.yaml``` Users should configure the parameters in CR. The following table lists the primary configurable parameters of the Unity driver and their default values: - | Parameter | Description | Required | Default | - | ----------------------------------------------- | ------------------------------------------------------------ | -------- | --------------------- | - | ***Common parameters for node and controller*** | | | | - | CSI_ENDPOINT | Specifies the HTTP endpoint for Unity. | No | /var/run/csi/csi.sock | - | X_CSI_UNITY_ALLOW_MULTI_POD_ACCESS | Flag to enable multiple pods use the same pvc on the same node with RWO access mode | No | false | - | ***Controller parameters*** | | | | - | X_CSI_MODE | Driver starting mode | No | controller | - | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true | - | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | | - | ***Node parameters*** | | | | - | X_CSI_MODE | Driver starting mode | No | node | - | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands. | No | /noderoot | - | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | | | - -### Example CR for Unity + | Parameter | Description | Required | Default | + | ----------------------------------------------- | --------------------------------------------------------------------------- | -------- | --------------------- | + | ***Common parameters for node and controller*** | | | | + | CSI_ENDPOINT | Specifies the HTTP endpoint for Unity XT. 
| No | /var/run/csi/csi.sock | + | X_CSI_UNITY_ALLOW_MULTI_POD_ACCESS | Flag to enable multiple pods use same pvc on same node with RWO access mode | No | false | + | ***Controller parameters*** | | | | + | X_CSI_MODE | Driver starting mode | No | controller | + | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true | + | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | | + | ***Node parameters*** | | | | + | X_CSI_MODE | Driver starting mode | No | node | + | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands | No | /noderoot | + | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | | + +### Example CR for Unity XT Refer samples from [here](https://github.com/dell/dell-csi-operator/tree/master/samples). Below is an example CR: ```yaml apiVersion: storage.dell.com/v1 @@ -97,12 +97,12 @@ metadata: namespace: test-unity spec: driver: - configVersion: v2.2.0 + configVersion: v2.3.0 replicas: 2 dnsPolicy: ClusterFirstWithHostNet forceUpdate: false common: - image: "dellemc/csi-unity:v2.2.0" + image: "dellemc/csi-unity:v2.3.0" imagePullPolicy: IfNotPresent sideCars: - name: provisioner @@ -115,8 +115,8 @@ spec: controller: envs: - # X_CSI_ENABLE_VOL_HEALTH_MONITOR: Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition. - # As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar + # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin - volume condition. + # Install the 'external-health-monitor' sidecar accordingly. # Allowed values: # true: enable checking of health condition of CSI volumes # false: disable checking of health condition of CSI volumes @@ -130,16 +130,16 @@ spec: # Leave as blank to consider all nodes # Allowed values: map of key-value pairs # Default value: None - # Examples: - # node-role.kubernetes.io/master: "" nodeSelector: - # node-role.kubernetes.io/master: "" + # Uncomment if nodes you wish to use have the node-role.kubernetes. io/control-plane taint + # node-role.kubernetes.io/control-plane: "" # tolerations: Define tolerations for the controllers, if required. # Leave as blank to install controller on worker nodes # Default value: None tolerations: - # - key: "node-role.kubernetes.io/master" + # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint + # - key: "node-role.kubernetes.io/control-plane" # operator: "Exists" # effect: "NoSchedule" @@ -158,18 +158,26 @@ spec: # Leave as blank to consider all nodes # Allowed values: map of key-value pairs # Default value: None - # Examples: - # node-role.kubernetes.io/master: "" nodeSelector: - # node-role.kubernetes.io/master: "" + # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint + # node-role.kubernetes.io/control-plane: "" - # tolerations: Define tolerations for the controllers, if required. - # Leave as blank to install controller on worker nodes + # tolerations: Define tolerations for the node daemonset, if required. 
# Default value: None tolerations: - # - key: "node-role.kubernetes.io/master" + # Uncomment if nodes you wish to use have the node-role.kubernetes.io/control-plane taint + # - key: "node-role.kubernetes.io/control-plane" # operator: "Exists" # effect: "NoSchedule" + # - key: "node.kubernetes.io/memory-pressure" + # operator: "Exists" + # effect: "NoExecute" + # - key: "node.kubernetes.io/disk-pressure" + # operator: "Exists" + # effect: "NoExecute" + # - key: "node.kubernetes.io/network-unavailable" + # operator: "Exists" + # effect: "NoExecute" --- apiVersion: v1 @@ -188,8 +196,6 @@ data: ## Dynamic Logging Configuration -This feature is introduced in CSI Driver for unity version 2.0.0. - ### Operator based installation As part of driver installation, a ConfigMap with the name `unity-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level user can set this field during driver installation. @@ -199,12 +205,12 @@ kubectl edit configmap -n unity unity-config-params ``` **Note** : - 1. Prior to CSI Driver for unity version 2.0.0, the log level was allowed to be updated dynamically through `logLevel` attribute in the secret object. + 1. The log level is not allowed to be updated dynamically through `logLevel` attribute in the secret object. 2. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation. 3. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation. ## Volume Health Monitoring -This feature is introduced in CSI Driver for unity version 2.1.0. +This feature is introduced in CSI Driver for Unity XT version v2.1.0. ### Operator based installation diff --git a/content/v1/csidriver/installation/test/unity.md b/content/v1/csidriver/installation/test/unity.md index 95998ad511..db32d53c98 100644 --- a/content/v1/csidriver/installation/test/unity.md +++ b/content/v1/csidriver/installation/test/unity.md @@ -1,10 +1,10 @@ --- -title: Test Unity CSI Driver -linktitle: Unity -description: Tests to validate Unity CSI Driver installation +title: Test Unity XT CSI Driver +linktitle: Unity XT +description: Tests to validate Unity XT CSI Driver installation --- -## Test deploying a simple Pod and Pvc with Unity storage +## Test deploying a simple Pod and Pvc with Unity XT storage In the repository, a simple test manifest exists that creates three different PersistentVolumeClaims using default NFS and iSCSI and FC storage classes and automatically mounts them to the pod. **Steps** @@ -30,7 +30,7 @@ You can find all the created resources in `test-unity` namespace. ## Support for SLES 15 SP2 -The CSI Driver for Dell Unity requires the following set of packages installed on all worker nodes that run on SLES 15 SP2. +The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15 SP2. 
- open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning** - nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning** diff --git a/content/v1/csidriver/partners/operator.md b/content/v1/csidriver/partners/operator.md index d60c9e6459..1b4a5fffd2 100644 --- a/content/v1/csidriver/partners/operator.md +++ b/content/v1/csidriver/partners/operator.md @@ -12,7 +12,7 @@ Users can install the Dell CSI Operator via [Operatorhub.io](https://operatorhub ![](../ophub1.png) -2. Click DellEMC Operator. +2. Click Dell Operator. ![](../ophub2.png) diff --git a/content/v1/csidriver/partners/tanzu.md b/content/v1/csidriver/partners/tanzu.md index 393f5b398f..33c7aafeaa 100644 --- a/content/v1/csidriver/partners/tanzu.md +++ b/content/v1/csidriver/partners/tanzu.md @@ -3,7 +3,7 @@ title: "VMware Tanzu" Description: "About VMware Tanzu basic" --- -The CSI Driver for Dell Unity and PowerScale supports VMware Tanzu and deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and supervisor namespace. +The CSI Driver for Dell Unity XT, PowerScale and PowerStore supports VMware Tanzu. The deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and the supervisor namespace. Currently, VMware Tanzu with normal configuration(without NAT) supports Kubernetes 1.20 and higher. The CSI driver can be installed on this cluster using Helm. Installation of CSI drivers in Tanzu via Operator has not been qualified. diff --git a/content/v1/csidriver/release/operator.md b/content/v1/csidriver/release/operator.md index 4451adff9d..9696d83067 100644 --- a/content/v1/csidriver/release/operator.md +++ b/content/v1/csidriver/release/operator.md @@ -3,13 +3,14 @@ title: Operator description: Release notes for Dell CSI Operator --- -## Release Notes - Dell CSI Operator 1.7.0 +## Release Notes - Dell CSI Operator 1.8.0 ->**Note:** There will be a delay in certification of Dell CSI Operator 1.7.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.7.0 release. +>**Note:** There will be a delay in certification of Dell CSI Operator 1.8.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.8.0 release. ### New Features/Changes -- Added support for Kubernetes 1.23. +- Added support for Kubernetes 1.24. +- Added support for OpenShift 4.10. ### Fixed Issues There are no fixed issues in this release. diff --git a/content/v1/csidriver/release/powerflex.md b/content/v1/csidriver/release/powerflex.md index eabc638190..b77837c82e 100644 --- a/content/v1/csidriver/release/powerflex.md +++ b/content/v1/csidriver/release/powerflex.md @@ -3,21 +3,23 @@ title: PowerFlex description: Release notes for PowerFlex CSI driver --- -## Release Notes - CSI PowerFlex v2.2.0 +## Release Notes - CSI PowerFlex v2.4.0 ### New Features/Changes -- Added support for Kubernetes 1.23. -- Added support for Amazon Elastic Kubernetes Service Anywhere. +- Added InstallationID annotation for volume attributes. +- Added optional parameter protectionDomain to storageclass. +- RHEL 8.6 support added ### Fixed Issues -There are no fixed issues in this release. +- Enhancements to volume group snapshotter. 
### Known Issues | Issue | Workaround | |-------|------------| | Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100| +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | ### Note: diff --git a/content/v1/csidriver/release/powermax.md b/content/v1/csidriver/release/powermax.md index 5739dd04ee..20163037c0 100644 --- a/content/v1/csidriver/release/powermax.md +++ b/content/v1/csidriver/release/powermax.md @@ -3,12 +3,19 @@ title: PowerMax description: Release notes for PowerMax CSI driver --- -## Release Notes - CSI PowerMax v2.2.0 +## Release Notes - CSI PowerMax v2.3.0 ### New Features/Changes -- Added support for new access modes in CSI Spec 1.5. -- Added support for Volume Health Monitoring. -- Added support for Kubernetes 1.23. +- Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype. +- Added support for Standalone Helm Charts. +- Removed beta volumesnapshotclass sample files. +- Added mapping of PV/PVC to namespace. +- Added support to configure fsGroupPolicy. +- Added support to filter topology keys based on user inputs. +- Added support for SRDF Metro group sharing multiple namespaces. +- Added support for Kubernetes 1.24. +- Added support for OpenShift 4.10. +- Added support to convert replicated volume to non-replicated volume and vice versa for Sync and Async modes. ### Fixed Issues There are no fixed issues in this release. @@ -21,8 +28,9 @@ There are no fixed issues in this release. | Getting initiators list fails with context deadline error | The following error can occur during the driver installation if a large number of initiators are present on the array. There is no workaround for this but it can be avoided by deleting stale initiators on the array| | Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains| | GetSnapVolumeList fails with context deadline error | The following error can occur if a large number of snapshots are present on the array. There is no workaround for this but it can be avoided by deleting unused snapshots on the array| +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | +| After expanding a file system volume, the new size is not reflected inside the container | This is a known issue and has been reported at https://github.com/dell/csm/issues/378. Workaround: remount the volumes:
1. Edit the replica count as 0 in application StatefulSet
2. Change the replica count as 1 for same StatefulSet. | ### Note: -- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. -- Expansion of volumes and cloning of volumes are not supported for replicated volumes. +- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. diff --git a/content/v1/csidriver/release/powerscale.md b/content/v1/csidriver/release/powerscale.md index ff2a38a5eb..1a14c62bb6 100644 --- a/content/v1/csidriver/release/powerscale.md +++ b/content/v1/csidriver/release/powerscale.md @@ -3,18 +3,19 @@ title: PowerScale description: Release notes for PowerScale CSI driver --- -## Release Notes - CSI Driver for PowerScale v2.2.0 +## Release Notes - CSI Driver for PowerScale v2.3.0 ### New Features/Changes -- Added support for Replication. -- Added support for Kubernetes 1.23. -- Added support to configure fsGroupPolicy. -- Added support for session based authentication along with basic authentication for PowerScale. +- Removed beta volumesnapshotclass sample files. +- Added support for Kubernetes 1.24. +- Added support to increase volume path limit. +- Added support for OpenShift 4.10. +- Added support for CSM Resiliency sidecar via Helm. ### Fixed Issues -- CSI Driver installation fails with the error message "error getting FQDN". +There are no fixed issues in this release. ### Known Issues | Issue | Resolution or workaround, if known | diff --git a/content/v1/csidriver/release/powerstore.md b/content/v1/csidriver/release/powerstore.md index c624c9c509..f0bbb59e8a 100644 --- a/content/v1/csidriver/release/powerstore.md +++ b/content/v1/csidriver/release/powerstore.md @@ -3,14 +3,16 @@ title: PowerStore description: Release notes for PowerStore CSI driver --- -## Release Notes - CSI PowerStore v2.2.0 +## Release Notes - CSI PowerStore v2.3.0 ### New Features/Changes -- Added support for NVMe/TCP protocol. -- Added support for Kubernetes 1.23. -- Added support to configure fsGroupPolicy. -- Added support for configuring permissions using POSIX mode bits and NFSv4 ACLs on NFS mount directory. +- Support Volume Group Snapshots. +- Removed beta volumesnapshotclass sample files. +- Support Configurable Volume Attributes. +- Added support for Kubernetes 1.24. +- Added support for OpenShift 4.10. +- Added support for NVMe/FC protocol. ### Fixed Issues @@ -22,6 +24,8 @@ There are no fixed issues in this release. |--------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. 
This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100
| | fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior, set `allowRoot: "true"` in the storage class parameters | +| If the NVMeFC pod is not created and the host loses the ssh connection, causing the driver pods to go into an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection | +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | ### Note: diff --git a/content/v1/csidriver/release/unity.md b/content/v1/csidriver/release/unity.md index 87517e3703..701d0778d4 100644 --- a/content/v1/csidriver/release/unity.md +++ b/content/v1/csidriver/release/unity.md @@ -1,25 +1,27 @@ --- -title: Unity -description: Release notes for Unity CSI driver +title: Unity XT +description: Release notes for Unity XT CSI driver --- -## Release Notes - CSI Unity v2.2.0 +## Release Notes - CSI Unity XT v2.3.0 ### New Features/Changes -- Added support for Kubernetes 1.23. -- Added support for Standalone Helm Charts. +- Removed beta volumesnapshotclass sample files. +- Added support for Kubernetes 1.24. +- Added support for OpenShift 4.10. ### Fixed Issues - +CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI. ### Known Issues | Issue | Workaround | |-------|------------| | Topology-related node labels are not removed automatically. | Currently, when the driver is uninstalled, topology-related node labels are not getting removed automatically. There is an open issue in the Kubernetes to fix this. Until the fix is released, remove the labels manually after the driver un-installation using command **kubectl label node - - ...** Example: **kubectl label node csi-unity.dellemc.com/array123-iscsi-** Note: there must be - at the end of each label to remove it.| -| NFS Clone - Resize of the snapshot is not supported by Unity Platform.| Currently, when the driver takes a clone of NFS volume, it succeeds. But when the user tries to resize the NFS volumesnapshot, the driver will throw an error. The user should never try to resize the cloned NFS volume.| +| NFS Clone - Resize of the snapshot is not supported by Unity XT Platform, however the user should never try to resize the cloned NFS volume.| Currently, when the driver takes a clone of NFS volume, it succeeds but if the user tries to resize the NFS volumesnapshot, the driver will throw an error.| | Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100| +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the VolumeAttachment to the node that went down.
Now the volume can be attached to the new node. | ### Note: diff --git a/content/v1/csidriver/troubleshooting/powerflex.md b/content/v1/csidriver/troubleshooting/powerflex.md index 5699c2ec98..373605cc8e 100644 --- a/content/v1/csidriver/troubleshooting/powerflex.md +++ b/content/v1/csidriver/troubleshooting/powerflex.md @@ -20,6 +20,8 @@ description: Troubleshooting PowerFlex Driver | The controller pod is stuck and producing errors such as" `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. | | Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | | Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) | +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | +| CSI-PowerFlex volumes cannot mount; they are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix:
1. Remove any multipath mapping involving a PowerFlex volume with `multipath -f `
2. Blacklist CSI-PowerFlex volumes in multipath config file | >*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease diff --git a/content/v1/csidriver/troubleshooting/powermax.md b/content/v1/csidriver/troubleshooting/powermax.md index e1e7587300..76cc3d4b23 100644 --- a/content/v1/csidriver/troubleshooting/powermax.md +++ b/content/v1/csidriver/troubleshooting/powermax.md @@ -9,3 +9,5 @@ description: Troubleshooting PowerMax Driver | `kubectl describe pod powermax-controller- –n ` indicates that the driver image could not be loaded | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or log in to the docker registry | | `kubectl logs powermax-controller- –n driver` logs show that the driver cannot authenticate | Check your secret’s username and password | | `kubectl logs powermax-controller- –n driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty or it has the valid certificates| +|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.| +| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | diff --git a/content/v1/csidriver/troubleshooting/powerstore.md b/content/v1/csidriver/troubleshooting/powerstore.md index 2de1b8de02..62c1622262 100644 --- a/content/v1/csidriver/troubleshooting/powerstore.md +++ b/content/v1/csidriver/troubleshooting/powerstore.md @@ -9,3 +9,6 @@ description: Troubleshooting PowerStore Driver | The `kubectl logs -n csi-powerstore powerstore-node-` driver logs show that the driver can't connect to PowerStore API. | Check if you've created a secret with correct credentials | |Installation of the driver on Kubernetes supported versions fails with the following error:
```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires the v1 version of snapshot CRDs to be created in the cluster; see the [Volume Snapshot Requirements](../../installation/helm/powerstore/#optional-volume-snapshot-requirements)| | If a PVC is not created and the following error appears in the PVC description:
```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with correct credentials | +| If the NVMeFC pod is not created and the host loses the ssh connection, causing the driver pods to go into an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection | +| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | +| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. | \ No newline at end of file diff --git a/content/v1/csidriver/troubleshooting/unity.md b/content/v1/csidriver/troubleshooting/unity.md index 447b218737..9905215390 100644 --- a/content/v1/csidriver/troubleshooting/unity.md +++ b/content/v1/csidriver/troubleshooting/unity.md @@ -1,16 +1,16 @@ --- -title: Unity -description: Troubleshooting Unity Driver +title: Unity XT +description: Troubleshooting Unity XT Driver --- --- | Symptoms | Prevention, Resolution or Workaround | | --- | --- | | When you run the command `kubectl describe pods unity-controller- –n unity`, the system indicates that the driver image could not be loaded. | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or login to the docker registry | -| The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity - Authentication failure. | Check if you have created a secret with correct credentials | +| The `kubectl logs -n unity unity-node-` driver logs show that the driver can't connect to Unity XT - Authentication failure. | Check if you have created a secret with correct credentials | | `fsGroup` specified in pod spec is not reflected in files or directories at mounted path of volume. | fsType of PVC must be set for fsGroup to work. fsType can be specified while creating a storage class. For NFS protocol, fsType can be specified as `nfs`. fsGroup doesn't work for ephemeral inline volumes. | | Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically| | If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. | | PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the below command the driver pods with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** | -| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. 
*Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | - +| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | +| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down
2. Delete the VolumeAttachment to the node that went down.
Now the volume can be attached to the new node. | diff --git a/content/v1/csidriver/upgradation/drivers/isilon.md b/content/v1/csidriver/upgradation/drivers/isilon.md index e473a299e4..75fca2acda 100644 --- a/content/v1/csidriver/upgradation/drivers/isilon.md +++ b/content/v1/csidriver/upgradation/drivers/isilon.md @@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver --- You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator. -## Upgrade Driver from version 2.1.0 to 2.2.0 using Helm +## Upgrade Driver from version 2.2.0 to 2.3.0 using Helm **Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. Clone the repository using `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements. +1. Clone the repository using `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements. 2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer` 3. Upgrade the CSI Driver for Dell PowerScale using following command: diff --git a/content/v1/csidriver/upgradation/drivers/operator.md b/content/v1/csidriver/upgradation/drivers/operator.md index d3f9b22a5b..eab8bedd28 100644 --- a/content/v1/csidriver/upgradation/drivers/operator.md +++ b/content/v1/csidriver/upgradation/drivers/operator.md @@ -13,7 +13,7 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the ### Using Installation Script -1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`. +1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`. 2. cd dell-csi-operator 3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator. >Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. diff --git a/content/v1/csidriver/upgradation/drivers/powerflex.md b/content/v1/csidriver/upgradation/drivers/powerflex.md index 0611b63233..5c181f183e 100644 --- a/content/v1/csidriver/upgradation/drivers/powerflex.md +++ b/content/v1/csidriver/upgradation/drivers/powerflex.md @@ -10,12 +10,11 @@ Description: Upgrade PowerFlex CSI driver You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator. -## Update Driver from v2.1 to v2.2 using Helm +## Update Driver from v2.2 to v2.3 using Helm **Steps** -1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.2.0 driver. +1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.3.0 driver. 2. You need to create config.yaml with the configuration of your system. 
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver) - You must set the only system managed in v1.5/v2.0/v2.1 driver as default in config.json in v2.2 so that the driver knows the existing volumes belong to that system. 3. Update values file as needed. 4. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade`. diff --git a/content/v1/csidriver/upgradation/drivers/powermax.md b/content/v1/csidriver/upgradation/drivers/powermax.md index 1f2ba76421..98e1fd3059 100644 --- a/content/v1/csidriver/upgradation/drivers/powermax.md +++ b/content/v1/csidriver/upgradation/drivers/powermax.md @@ -10,10 +10,10 @@ Description: Upgrade PowerMax CSI driver You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator. -## Update Driver from v2.1 to v2.2 using Helm +## Update Driver from v2.2 to v2.3 using Helm **Steps** -1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.2 driver. +1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.3 driver. 2. Update the values file as needed. 2. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`. diff --git a/content/v1/csidriver/upgradation/drivers/powerstore.md b/content/v1/csidriver/upgradation/drivers/powerstore.md index 7f5152bd3f..089fa38c68 100644 --- a/content/v1/csidriver/upgradation/drivers/powerstore.md +++ b/content/v1/csidriver/upgradation/drivers/powerstore.md @@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator. -## Update Driver from v2.1 to v2.2 using Helm +## Update Driver from v2.2 to v2.3 using Helm Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. +1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. 2. Edit `helm/config.yaml` file and configure connection information for your PowerStore arrays changing the following parameters: - *endpoint*: defines the full URL path to the PowerStore API. - *globalID*: specifies what storage cluster the driver should use diff --git a/content/v1/csidriver/upgradation/drivers/unity.md b/content/v1/csidriver/upgradation/drivers/unity.md index 23ee1340e1..26b4e4d47d 100644 --- a/content/v1/csidriver/upgradation/drivers/unity.md +++ b/content/v1/csidriver/upgradation/drivers/unity.md @@ -1,13 +1,13 @@ --- -title: "Unity" +title: "Unity XT" tags: - upgrade - csi-driver weight: 1 -Description: Upgrade Unity CSI driver +Description: Upgrade Unity XT CSI driver --- -You can upgrade the CSI Driver for Dell Unity using Helm or Dell CSI Operator. +You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator. **Note:** 1. User has to re-create existing custom-storage classes (if any) according to the latest format. @@ -20,13 +20,12 @@ You can upgrade the CSI Driver for Dell Unity using Helm or Dell CSI Operator. 
Preparing myvalues.yaml is the same as explained in the install section. -To upgrade the driver from csi-unity v2.1 to csi-unity 2.2 +To upgrade the driver from csi-unity v2.2.0 to csi-unity v2.3.0 -1. Get the latest csi-unity 2.2 code from Github using using `git clone -b v2.2.0 https://github.com/dell/csi-unity.git`. -2. Create myvalues.yaml. -3. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer with name say myvalues.yaml, to customize settings for installation edit myvalues.yaml as per the requirements. -4. Navigate to common-helm-installer folder and execute the following command: - `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade` +1. Get the latest csi-unity v2.3.0 code from Github using `git clone -b v2.3.0 https://github.com/dell/csi-unity.git`. +2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed. +3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command: + `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade` ### Using Operator diff --git a/content/v1/deployment/csmapi.md b/content/v1/deployment/csmapi.md deleted file mode 100644 index 812f36b835..0000000000 --- a/content/v1/deployment/csmapi.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "CSM REST API" -type: swagger -weight: 1 -description: Reference for the CSM REST API ---- - -{{< swaggerui src="../swagger.yaml" >}} \ No newline at end of file diff --git a/content/v1/deployment/csmcli.md b/content/v1/deployment/csmcli.md deleted file mode 100644 index 25ef2d7e43..0000000000 --- a/content/v1/deployment/csmcli.md +++ /dev/null @@ -1,269 +0,0 @@ ---- -title : CSM CLI -linktitle: CSM CLI -weight: 3 -description: > - Dell EMC Container Storage Modules (CSM) Command Line Interface(CLI) Deployment and Management ---- -`csm` is a command-line client for installation of Dell EMC Container Storage Modules and CSI Drivers for Kubernetes clusters. ## Pre-requisites 1. [Deploy the Container Storage Modules Installer](../../deployment) 2. Download/Install the `csm` binary from Github: https://github.com/dell/csm. Alternatively, you can build the binary by: - cloning the `csm` repository - changing into `csm/cmd/csm` directory - running `make build` 3. create a `cli_env.sh` file that contains the correct values for the below variables.
And export the variables by running `source ./cli_env.sh` - -```console -# Change this to CSM API Server IP -export API_SERVER_IP="127.0.0.1" - -# Change this to CSM API Server Port -export API_SERVER_PORT="31313" - -# CSM API Server protocol - allowed values are https & http -export SCHEME="https" - -# Path to store JWT -export AUTH_CONFIG_PATH="/home/user/installer-token/" -``` - -## Usage - -```console -~$ ./csm -h -csm is command line tool for csm application - -Usage: - csm [flags] - csm [command] - -Available Commands: - add add cluster, configuration or storage - approve-task approve task for application - authenticate authenticate user - change change - subcommand is password - create create application - delete delete storage, cluster, configuration or application - get get storage, cluster, application, configuration, supported driver, module, storage type - help Help about any command - reject-task reject task for an application - update update storage, configuration or cluster - -Flags: - -h, --help help for csm-cli - -Use "csm [command] --help" for more information about a command. -``` - -### Authenticate the User - -To begin with, you need to authenticate the user who will be managing the CSM Installer and its components. - -```console -./csm authenticate --username= --password= -``` -Or more securely, run the above command without `--password` to be prompted for one - -```console -./csm authenticate --username= -Enter user's password: - -``` - -### Change Password - -To change password follow below command - -```console -./csm change password --username= -``` - -### View Supported Platforms - -You can now view the supported Dell emcCSI Drivers - -```console -./csm get supported-drivers -``` - -You can also view the supported Modules - -```console -./csm get supported-modules -``` - -And also view the supported Storage Array Types - -```console -./csm get supported-storage-arrays -``` - -### Add a Cluster - -You can now add a cluster by providing cluster detail name and Kubeconfig path - -```console -./csm add cluster --clustername --configfilepath -``` - -### Upload Configuration Files - -You can now add a configuration file that can be used for creating application by providing filename and path - -```console -./csm add configuration --filename --filepath -``` - -### Add a Storage System - -You can now add storage endpoints, array type and its unique id - -```console -./csm add storage --endpoint --storage-type --unique-id --username -``` - -The optional `--meta-data` flag can be used to provide additional meta-data for the storage system that is used when creating Secrets for the CSI Driver. These fields include: - - isDefault: Set to true if this storage system is used as default for multi-array configuration - - skipCertificateValidation: Set to true to skip certificate validation - - mdmId: Comma separated list of MDM IPs for PowerFlex - - nasName: NAS Name for PowerStore - - blockProtocol: Block Protocol for PowerStore - - port: Port for PowerScale - - portGroups: Comma separated list of port group names for PowerMax - -### Create an Application - -You may now create an application depending on the specific use case. Below are the common use cases: - -
- CSI Driver - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays -``` -
- -
- CSI Driver with CSM Authorization - -CSM Authorization requires a `token.yaml` issued by storage Admin from the CSM Authorization Server, a certificate file, and the of the authorization server. The `token.yaml` and `cert` should be added by following the steps in [adding configuration file](#upload-configuration-files). CSM Authorization does not yet support all CSI Drivers/platforms(See [supported platforms documentation](../../authorization/#supported-platforms) or [supported platforms via CLI](#view-supported-platforms))). -Finally, run the command below: - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays \ - --module-type authorization: \ - --module-configuration "karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost=" - -``` -
- -
- CSM Observability(Standalone) - -CSM Observability depends on driver config secret(s) corresponding to the metric(s) you want to enable. Please see [CSM Observability](../../observability/metrics) for all Supported Metrics. For the sake of demonstration, assuming we want to enable [CSM Metrics for PowerFlex](../../observability/metrics/powerflex), the PowerFlex secret yaml should be added by following the steps in [adding configuration file](#upload-configuration-files). -Once this is done, run the command below: - -```console -./csm create application --clustername \ - --name \ - --module-type observability: \ - --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true" -``` -
- -
- CSM Observability(Standalone) with CSM Authorization - -See the individual steps for configuaration file pre-requisites for CSM Observability (Standalone) with CSM Authorization - -```console -./csm create application --clustername \ - --name \ - --module-type "observability:,authorization:" \ - --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true,karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost=" -``` -
- -
- CSI Driver for Dell EMC PowerMax with reverse proxy module - - To deploy CSI Driver for Dell EMC PowerMax with reverse proxy module, first upload reverse proxy tls crt and tls key via [adding configuration file](#upload-configuration-files). Then, use the below command to create application: - -```console -./csm create application --clustername \ - --driver-type powermax: --name \ - --storage-arrays \ - --module-type reverse-proxy: \ - --module-configuration reverseProxy.tlsSecretKeyFile=,reverseProxy.tlsSecretCertFile= -``` -
- -
- CSI Driver with replication module - - To deploy CSI driver with replication module, first add a target cluster through [adding cluster](#add-a-cluster). Then, use the below command(this command is an example to deploy CSI Driver for Dell EMC PowerStore with replication module) to create application:: - -```console -./csm create application --clustername \ - --driver-type powerstore: --name \ - --storage-arrays \ - --module-configuration target_cluster= \ - --module-type replication: -``` -
- - -
- CSI Driver with other module(s) not covered above - - Assuming you want to deploy a driver with `module A` and `module B`. If they have specific configurations of `A.image="docker:v1"`,`A.filename=hello`, and `B.namespace=world`. - -```console -./csm create application --clustername \ - --driver-type powerflex: --name \ - --storage-arrays \ - --module-type "module A:,module B:" \ - --module-configuration "A.image=docker:v1,A.filename=hello,B.namespace=world" -``` -
-
- -> __Note__: - - `--driver-type` and `--module-type` flags in create application command MUST match the values from the [supported CSM platforms](#view-supported-platforms) - - Replication module supports only using a pair of clusters at a time (source and a target/or single cluster) from CSM installer, However `repctl` can be used if needed to add multiple pairs of target clusters. Using replication module with other modules during application creation is not yet supported. - -### Approve application/task - -You may now approve the task so that you can continue to work with the application - -```console -./csm approve-task --applicationname -``` - -### Reject application/task - -You may want to reject a task or application to discontinue the ongoing process - -```console -./csm reject-task --applicationname -``` - -### Delete application/task - -If you want to delete an application - -```console -./csm delete application --name -``` - -> __Note__: When deleting an application, the namespace and Secrets are not deleted. These resources need to be deleted manually. See more in [Troubleshooting](../troubleshooting#after-deleting-an-application-why-cant-i-re-create-the-same-application). - -> __Note__: All commands and associated syntax can be displayed with -h or --help - diff --git a/content/v1/deployment/csminstaller/_index.md b/content/v1/deployment/csminstaller/_index.md index 95ae36a236..4527ddfd9f 100644 --- a/content/v1/deployment/csminstaller/_index.md +++ b/content/v1/deployment/csminstaller/_index.md @@ -22,7 +22,7 @@ The CSM (Container Storage Modules) Installer simplifies the deployment and mana | Replication | 1.0 | | Resiliency | 1.0 | | CSI Driver for PowerScale | v2.0 | -| CSI Driver for Unity | v2.0 | +| CSI Driver for Unity XT | v2.0 | | CSI Driver for PowerStore | v2.0 | | CSI Driver for PowerFlex | v2.0 | | CSI Driver for PowerMax | v2.0 | diff --git a/content/v1/deployment/csmoperator/_index.md b/content/v1/deployment/csmoperator/_index.md index 702fab7871..c89d7e9d74 100644 --- a/content/v1/deployment/csmoperator/_index.md +++ b/content/v1/deployment/csmoperator/_index.md @@ -16,19 +16,19 @@ Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenS | Kubernetes Version | OpenShift Version | | -------------------- | ------------------- | -| 1.21, 1.22, 1.23 | 4.8, 4.9 | +| 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS | ## Supported CSI Drivers | CSI Driver | Version | ConfigVersion | | ------------------ | --------- | -------------- | -| CSI PowerScale | 2.2.0 | v2.2.0 | +| CSI PowerScale | 2.2.0 + | v2.2.0 + | ## Supported CSM Modules | CSM Modules | Version | ConfigVersion | | ------------------ | --------- | -------------- | -| CSM Authorization | 1.2.0 | v1.2.0 | +| CSM Authorization | 1.2.0 + | v1.2.0 + | ## Installation Dell CSM Operator can be installed manually or via Operator Hub. @@ -82,6 +82,30 @@ To uninstall a CSM operator installed with OLM run `bash scripts/uninstall_olm.s {{< imgproc uninstall_olm.jpg Resize "2500x" >}}{{< /imgproc >}} +### Upgrade Dell CSM Operator +Dell CSM Operator can be upgraded in two ways: + +1. Using the installation script (for non-OLM based installations) + +2. Using Operator Lifecycle Manager (OLM) + +#### Using Installation Script +1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator). +2. `cd csm-operator` +3. `git checkout -b 'csm-operator-version'` +4. Execute `bash scripts/install.sh --upgrade`. This command installs the latest version of the operator.
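Taken together, the script-based upgrade amounts to the shell session sketched below. `csm-operator-version` is a placeholder for the operator release you want, exactly as in the steps above.

```bash
# Sketch of the non-OLM upgrade flow; 'csm-operator-version' is a placeholder
# for the operator release (for an existing tag or branch, a plain
# `git checkout csm-operator-version` checks it out directly).
git clone https://github.com/dell/csm-operator.git
cd csm-operator
git checkout -b 'csm-operator-version'
bash scripts/install.sh --upgrade   # re-runs the installer in upgrade mode
```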
+ +>Note: By default, Dell CSM Operator installs to the 'dell-csm-operator' namespace. + +#### Using OLM +The upgrade of the Dell CSM Operator is done via Operator Lifecycle Manager. + +The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role while upgrading dell-csm-operator on OpenShift. This option can be set during installation of dell-csm-operator on OpenShift via the console and can be set to either `Manual` or `Automatic`. +- If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csm-operator is available in the **`Operator hub`**, and upgrades it to the latest available version. +- If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking the hyperlink to `Approve` the installation triggers the dell-csm-operator upgrade process. + +**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`**. + ### Custom Resource Definitions As part of the Dell CSM Operator installation, a CRD representing configuration for the CSI Driver and CSM Modules is also installed. `containerstoragemodule` CRD is installed in API Group `storage.dell.com`. @@ -124,86 +148,3 @@ The specification for the Custom Resource is the same for all the drivers. Below **nodeSelector** - Used to specify node selectors for the driver StatefulSet/Deployment and DaemonSet. >**Note:** The `image` field should point to the correct image tag for the version of the driver you are installing. - -### Pre-requisites for installation of the CSI Drivers - -On Upstream Kubernetes clusters, make sure to install -* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs -* External Volume Snapshot Controller - -#### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) - -#### Volume Snapshot Controller -The CSI external-snapshotter sidecar is split into two controllers: -- A common snapshot controller -- A CSI external-snapshotter sidecar - -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) - -*NOTE:* -- The manifests available on GitHub install the snapshotter image: - - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags) -- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. - -#### Installation example - -You can install CRDs and the default snapshot controller by running the following commands: -```bash -git clone https://github.com/kubernetes-csi/external-snapshotter/ -cd ./external-snapshotter -git checkout release- -kubectl create -f client/config/crd -kubectl create -f deploy/kubernetes/snapshot-controller -``` -*NOTE:* -- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
- -## Installing CSI Driver via Operator - -Refer [PowerScale Driver](drivers/powerscale) to install the driver via Operator - ->**Note**: If you are using an OLM based installation, example manifests are available in `OperatorHub` UI. -You can edit these manifests and install the driver using the `OperatorHub` UI. - -### Verifying the driver installation -Once the driver `Custom Resource (CR)` is created, you can verify the installation as mentioned below - -* Check if ContainerStorageModule CR is created successfully using the command below: - ``` - $ kubectl get csm/ -n -o yaml - ``` -* Check the status of the CR to verify if the driver installation is in the `Succeeded` state. If the status is not `Succeeded`, see the [Troubleshooting guide](./troubleshooting/#my-dell-csi-driver-install-failed-how-do-i-fix-it) for more information. - - -### Update CSI Drivers -The CSI Drivers and CSM Modules installed by the Dell CSM Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include: - -* Modifying the installation directly via `kubectl edit` - For e.g. - If the name of the installed PowerScale driver is powerscale, then run - ``` - # Replace driver-namespace with the namespace where the PowerScale driver is installed - $ kubectl edit csm/powerscale -n - ``` - and modify the installation -* Modify the API object in-place via `kubectl patch` - -#### Supported modifications -* Changing environment variable values for driver -* Updating the image of the driver - -### Uninstall CSI Driver -The CSI Drivers and CSM Modules can be uninstalled by deleting the Custom Resource. - -For e.g. -``` -$ kubectl delete csm/powerscale -n -``` - -By default, the `forceRemoveDriver` option is set to `true` which will uninstall the CSI Driver and CSM Modules when the Custom Resource is deleted. Setting this option to `false` is not recommended. - -### SideCars -Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support. - -## Modules -The CSM Operator can optionally enable modules that are supported by the specific Dell CSI driver. By default, the modules are disabled but they can be enabled by setting the `enabled` flag to true and setting any other configuration options for the given module. diff --git a/content/v1/deployment/csmoperator/drivers/_index.md b/content/v1/deployment/csmoperator/drivers/_index.md index c850691c0d..18129d5071 100644 --- a/content/v1/deployment/csmoperator/drivers/_index.md +++ b/content/v1/deployment/csmoperator/drivers/_index.md @@ -4,3 +4,92 @@ linkTitle: "CSI Drivers" description: Installation of Dell CSI Drivers using Dell CSM Operator weight: 1 --- + +## Pre-requisites for installation of the CSI Drivers + +On Upstream Kubernetes clusters, ensure that you install +* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs +* External Volume Snapshot Controller + +### Volume Snapshot CRDs +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github.
Manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) + +### Volume Snapshot Controller +The CSI external-snapshotter sidecar is split into two controllers: +- A common snapshot controller +- A CSI external-snapshotter sidecar + +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) + +*NOTE:* +- The manifests available on GitHub install the snapshotter image: - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags) +- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. + +### Installation example + +You can install CRDs and the default snapshot controller by running the following commands: +```bash +git clone https://github.com/kubernetes-csi/external-snapshotter/ +cd ./external-snapshotter +git checkout release- +kubectl create -f client/config/crd +kubectl create -f deploy/kubernetes/snapshot-controller +``` +*NOTE:* +- It is recommended to use the 5.0.x version of snapshotter/snapshot-controller. + +## Installing CSI Driver via Operator + +Refer to [PowerScale Driver](../drivers/powerscale) to install the driver via Operator. + +>**Note**: If you are using an OLM based installation, example manifests are available in `OperatorHub` UI. +You can edit these manifests and install the driver using the `OperatorHub` UI. + +### Verifying the driver installation +Once the driver `Custom Resource (CR)` is created, you can verify the installation as mentioned below: + +* Check if ContainerStorageModule CR is created successfully using the command below: + ``` + $ kubectl get csm/ -n -o yaml + ``` +* Check the status of the CR to verify if the driver installation is in the `Succeeded` state. If the status is not `Succeeded`, see the [Troubleshooting guide](../troubleshooting/#my-dell-csi-driver-install-failed-how-do-i-fix-it) for more information. + + +### Update CSI Drivers +The CSI Drivers and CSM Modules installed by the Dell CSM Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include: + +* Modifying the installation directly via `kubectl edit` + For example - If the name of the installed PowerScale driver is powerscale, then run + ``` + # Replace driver-namespace with the namespace where the PowerScale driver is installed + $ kubectl edit csm/powerscale -n + ``` + and modify the installation +* Modify the API object in-place via `kubectl patch` + +#### Supported modifications +* Changing environment variable values for the driver +* Updating the image of the driver +* Upgrading the driver version + +**NOTES:** +1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required. + ```yaml + driver: + configVersion: v2.3.0 + ``` +2. Do not try to update the operator by modifying the original `CustomResource` manifest file and running the `kubectl apply -f` command.
As part of the driver installation, the Operator sets some annotations on the `CustomResource` object which are further utilized in some workflows (like detecting upgrade of drivers). If you run the `kubectl apply -f` command to update the driver, these annotations are overwritten and this may lead to failures. + +### Uninstall CSI Driver +The CSI Drivers and CSM Modules can be uninstalled by deleting the Custom Resource. + +For example: +``` +$ kubectl delete csm/powerscale -n +``` + +By default, the `forceRemoveDriver` option is set to `true`, which will uninstall the CSI Driver and CSM Modules when the Custom Resource is deleted. Setting this option to `false` is not recommended. + +### SideCars +Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support. diff --git a/content/v1/deployment/csmoperator/drivers/powerscale.md b/content/v1/deployment/csmoperator/drivers/powerscale.md index 4471f1d1e6..261e0c1222 100644 --- a/content/v1/deployment/csmoperator/drivers/powerscale.md +++ b/content/v1/deployment/csmoperator/drivers/powerscale.md @@ -137,7 +137,7 @@ User can query for all Dell CSI drivers using the following command: ```kubectl create -f ```. This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file. -5. [Verify the CSI Driver installation](../../#verifying-the-driver-installation) +7. [Verify the CSI Driver installation](../drivers/_index.md#verifying-the-driver-installation) **Note** : 1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation. diff --git a/content/v1/deployment/csmoperator/modules/_index.md b/content/v1/deployment/csmoperator/modules/_index.md index 4a76e7d868..1ac79f9d15 100644 --- a/content/v1/deployment/csmoperator/modules/_index.md +++ b/content/v1/deployment/csmoperator/modules/_index.md @@ -10,4 +10,4 @@ The steps include: 1. Deploy the Dell CSM Operator (if it is not already deployed). Please follow the instructions available [here](../../#installation). 2. Configure any pre-requisite for the desired module(s). See the specific module below for more information. -3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver)) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module. \ No newline at end of file +3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
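To make the module section concrete, below is a hypothetical `ContainerStorageModule` manifest with one module enabled. Field names follow the `containerstoragemodule` CRD in API Group `storage.dell.com` described earlier, but the names, versions, and namespace here are placeholders; take the authoritative schema from the sample manifests linked above.

```yaml
apiVersion: storage.dell.com/v1
kind: ContainerStorageModule
metadata:
  name: isilon            # placeholder CR name
  namespace: isilon       # placeholder namespace
spec:
  driver:
    csiDriverType: "isilon"   # PowerScale driver
    configVersion: v2.3.0     # must match the driver version being installed
  modules:
    - name: authorization     # module to enable
      enabled: true           # modules are disabled by default
```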
diff --git a/content/v1/deployment/swagger.yaml b/content/v1/deployment/swagger.yaml deleted file mode 100644 index 15a9b8b227..0000000000 --- a/content/v1/deployment/swagger.yaml +++ /dev/null @@ -1,1395 +0,0 @@ -basePath: /api/v1 -definitions: - ApplicationCreateRequest: - properties: - cluster_id: - type: string - driver_configuration: - items: - type: string - type: array - driver_type_id: - type: string - module_configuration: - items: - type: string - type: array - module_types: - items: - type: string - type: array - name: - type: string - storage_arrays: - items: - type: string - type: array - required: - - cluster_id - - driver_type_id - - name - type: object - ApplicationResponse: - properties: - application_output: - type: string - cluster_id: - type: string - driver_configuration: - items: - type: string - type: array - driver_type_id: - type: string - id: - type: string - module_configuration: - items: - type: string - type: array - module_types: - items: - type: string - type: array - name: - type: string - storage_arrays: - items: - type: string - type: array - type: object - ClusterResponse: - properties: - cluster_id: - type: string - cluster_name: - type: string - nodes: - description: The nodes - type: string - type: object - ConfigFileResponse: - properties: - id: - type: string - name: - type: string - type: object - DriverResponse: - properties: - id: - type: string - storage_array_type_id: - type: string - version: - type: string - type: object - ErrorMessage: - properties: - arguments: - items: - type: string - type: array - code: - description: HTTPStatusEnum Possible HTTP status values of completed or failed - jobs - enum: - - 200 - - 201 - - 202 - - 204 - - 400 - - 401 - - 403 - - 404 - - 422 - - 429 - - 500 - - 503 - type: integer - message: - description: Message string. - type: string - message_l10n: - description: Localized message - type: object - severity: - description: |- - SeverityEnum - The severity of the condition - * INFO - Information that may be of use in understanding the failure. It is not a problem to fix. - * WARNING - A condition that isn't a failure, but may be unexpected or a contributing factor. It may be necessary to fix the condition to successfully retry the request. - * ERROR - An actual failure condition through which the request could not continue. - * CRITICAL - A failure with significant impact to the system. Normally failed commands roll back and are just ERROR, but this is possible - enum: - - INFO - - WARNING - - ERROR - - CRITICAL - type: string - type: object - ErrorResponse: - properties: - http_status_code: - description: HTTPStatusEnum Possible HTTP status values of completed or failed - jobs - enum: - - 200 - - 201 - - 202 - - 204 - - 400 - - 401 - - 403 - - 404 - - 422 - - 429 - - 500 - - 503 - type: integer - messages: - description: |- - A list of messages describing the failure encountered by this request. 
At least one will - be of Error severity because Info and Warning conditions do not cause the request to fail - items: - $ref: '#/definitions/ErrorMessage' - type: array - type: object - ModuleResponse: - properties: - id: - type: string - name: - type: string - standalone: - type: boolean - version: - type: string - type: object - StorageArrayCreateRequest: - properties: - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - password: - type: string - storage_array_type: - type: string - unique_id: - type: string - username: - type: string - required: - - management_endpoint - - password - - storage_array_type - - unique_id - - username - type: object - StorageArrayResponse: - properties: - id: - type: string - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - storage_array_type_id: - type: string - unique_id: - type: string - username: - type: string - type: object - StorageArrayTypeResponse: - properties: - id: - type: string - name: - type: string - type: object - StorageArrayUpdateRequest: - properties: - management_endpoint: - type: string - meta_data: - items: - type: string - type: array - password: - type: string - storage_array_type: - type: string - unique_id: - type: string - username: - type: string - type: object - TaskResponse: - properties: - _links: - additionalProperties: - additionalProperties: - type: string - type: object - type: object - application_name: - type: string - id: - type: string - logs: - type: string - status: - type: string - type: object -info: - contact: {} - description: CSM Deployment API - title: CSM Deployment API - version: "1.0" -paths: - /applications: - get: - consumes: - - application/json - description: List all applications - operationId: list-applications - parameters: - - description: Application Name - in: query - name: name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ApplicationResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all applications - tags: - - application - post: - consumes: - - application/json - description: Create a new application - operationId: create-application - parameters: - - description: Application info for creation - in: body - name: application - required: true - schema: - $ref: '#/definitions/ApplicationCreateRequest' - produces: - - application/json - responses: - "202": - description: Accepted - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new application - tags: - - application - /applications/{id}: - delete: - consumes: - - application/json - description: Delete an application - operationId: delete-application - parameters: - - description: Application ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not 
Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete an application - tags: - - application - get: - consumes: - - application/json - description: Get an application - operationId: get-application - parameters: - - description: Application ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ApplicationResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get an application - tags: - - application - /clusters: - get: - consumes: - - application/json - description: List all clusters - operationId: list-clusters - parameters: - - description: Cluster Name - in: query - name: cluster_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ClusterResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all clusters - tags: - - cluster - post: - consumes: - - application/json - description: Create a new cluster - operationId: create-cluster - parameters: - - description: Name of the cluster - in: formData - name: name - required: true - type: string - - description: kube config file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new cluster - tags: - - cluster - /clusters/{id}: - delete: - consumes: - - application/json - description: Delete a cluster - operationId: delete-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete a cluster - tags: - - cluster - get: - consumes: - - application/json - description: Get a cluster - operationId: get-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: 
- $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a cluster - tags: - - cluster - patch: - consumes: - - application/json - description: Update a cluster - operationId: update-cluster - parameters: - - description: Cluster ID - in: path - name: id - required: true - type: string - - description: Name of the cluster - in: formData - name: name - type: string - - description: kube config file - in: formData - name: file - type: file - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ClusterResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a cluster - tags: - - cluster - /configuration-files: - get: - consumes: - - application/json - description: List all configuration files - operationId: list-config-file - parameters: - - description: Name of the configuration file - in: query - name: config_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ConfigFileResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all configuration files - tags: - - configuration-file - post: - consumes: - - application/json - description: Create a new configuration file - operationId: create-config-file - parameters: - - description: Name of the configuration file - in: formData - name: name - required: true - type: string - - description: Configuration file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/ConfigFileResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new configuration file - tags: - - configuration-file - /configuration-files/{id}: - delete: - consumes: - - application/json - description: Delete a configuration file - operationId: delete-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "204": - description: "" - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete a configuration file - tags: - - configuration-file - get: - consumes: - - application/json - description: Get a configuration file - operationId: get-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: 
'#/definitions/ConfigFileResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a configuration file - tags: - - configuration-file - patch: - consumes: - - application/json - description: Update a configuration file - operationId: update-config-file - parameters: - - description: Configuration file ID - in: path - name: id - required: true - type: string - - description: Name of the configuration file - in: formData - name: name - required: true - type: string - - description: Configuration file - in: formData - name: file - required: true - type: file - produces: - - application/json - responses: - "204": - description: No Content - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a configuration file - tags: - - configuration-file - /driver-types: - get: - consumes: - - application/json - description: List all driver types - operationId: list-driver-types - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/DriverResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all driver types - tags: - - driver-type - /driver-types/{id}: - get: - consumes: - - application/json - description: Get a driver type - operationId: get-driver-type - parameters: - - description: Driver Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/DriverResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a driver type - tags: - - driver-type - /module-types: - get: - consumes: - - application/json - description: List all module types - operationId: list-module-type - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/ModuleResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all module types - tags: - - module-type - /module-types/{id}: - get: - consumes: - - application/json - description: Get a module type - operationId: get-module-type - parameters: - - description: Module Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/ModuleResponse' - "400": - 
description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a module type - tags: - - module-type - /storage-array-types: - get: - consumes: - - application/json - description: List all storage array types - operationId: list-storage-array-type - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/StorageArrayTypeResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all storage array types - tags: - - storage-array-type - /storage-array-types/{id}: - get: - consumes: - - application/json - description: Get a storage array type - operationId: get-storage-array-type - parameters: - - description: Storage Array Type ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/StorageArrayTypeResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a storage array type - tags: - - storage-array-type - /storage-arrays: - get: - consumes: - - application/json - description: List all storage arrays - operationId: list-storage-arrays - parameters: - - description: Unique ID - in: query - name: unique_id - type: string - - description: Storage Type - in: query - name: storage_type - type: string - produces: - - application/json - responses: - "202": - description: Accepted - schema: - items: - $ref: '#/definitions/StorageArrayResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all storage arrays - tags: - - storage-array - post: - consumes: - - application/json - description: Create a new storage array - operationId: create-storage-array - parameters: - - description: Storage Array info for creation - in: body - name: storageArray - required: true - schema: - $ref: '#/definitions/StorageArrayCreateRequest' - produces: - - application/json - responses: - "201": - description: Created - schema: - $ref: '#/definitions/StorageArrayResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Create a new storage array - tags: - - storage-array - /storage-arrays/{id}: - delete: - consumes: - - application/json - description: Delete storage array - operationId: delete-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - 
type: string - produces: - - application/json - responses: - "200": - description: Success - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Delete storage array - tags: - - storage-array - get: - consumes: - - application/json - description: Get storage array - operationId: get-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/StorageArrayResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get storage array - tags: - - storage-array - patch: - consumes: - - application/json - description: Update a storage array - operationId: update-storage-array - parameters: - - description: Storage Array ID - in: path - name: id - required: true - type: string - - description: Storage Array info for update - in: body - name: storageArray - required: true - schema: - $ref: '#/definitions/StorageArrayUpdateRequest' - produces: - - application/json - responses: - "204": - description: No Content - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Update a storage array - tags: - - storage-array - /tasks: - get: - consumes: - - application/json - description: List all tasks - operationId: list-tasks - parameters: - - description: Application Name - in: query - name: application_name - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - items: - $ref: '#/definitions/TaskResponse' - type: array - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: List all tasks - tags: - - task - /tasks/{id}: - get: - consumes: - - application/json - description: Get a task - operationId: get-task - parameters: - - description: Task ID - in: path - name: id - required: true - type: string - produces: - - application/json - responses: - "200": - description: OK - schema: - $ref: '#/definitions/TaskResponse' - "303": - description: See Other - schema: - $ref: '#/definitions/TaskResponse' - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Get a task - tags: - - task - /tasks/{id}/approve: - post: - consumes: - - application/json - description: Approve state change for an application - operationId: approve-state-change-application - parameters: - - 
description: Task ID - in: path - name: id - required: true - type: string - - description: Task is associated with an Application update operation - in: query - name: updating - type: boolean - produces: - - application/json - responses: - "202": - description: Accepted - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Approve state change for an application - tags: - - task - /tasks/{id}/cancel: - post: - consumes: - - application/json - description: Cancel state change for an application - operationId: cancel-state-change-application - parameters: - - description: Task ID - in: path - name: id - required: true - type: string - - description: Task is associated with an Application update operation - in: query - name: updating - type: boolean - produces: - - application/json - responses: - "200": - description: Success - schema: - type: string - "400": - description: Bad Request - schema: - $ref: '#/definitions/ErrorResponse' - "404": - description: Not Found - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - ApiKeyAuth: [] - summary: Cancel state change for an application - tags: - - task - /users/change-password: - patch: - consumes: - - application/json - description: Change password for existing user - operationId: change-password - parameters: - - description: Enter New Password - format: password - in: query - name: password - required: true - type: string - produces: - - application/json - responses: - "204": - description: No Content - "401": - description: Unauthorized - schema: - $ref: '#/definitions/ErrorResponse' - "403": - description: Forbidden - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - BasicAuth: [] - summary: Change password for existing user - tags: - - user - /users/login: - post: - consumes: - - application/json - description: Login for existing user - operationId: login - produces: - - application/json - responses: - "200": - description: Bearer Token for Logged in User - schema: - type: string - "401": - description: Unauthorized - schema: - $ref: '#/definitions/ErrorResponse' - "403": - description: Forbidden - schema: - $ref: '#/definitions/ErrorResponse' - "500": - description: Internal Server Error - schema: - $ref: '#/definitions/ErrorResponse' - security: - - BasicAuth: [] - summary: Login for existing user - tags: - - user -securityDefinitions: - ApiKeyAuth: - in: header - name: Authorization - type: apiKey - BasicAuth: - type: basic -swagger: "2.0" diff --git a/content/v1/deployment/troubleshooting.md b/content/v1/deployment/troubleshooting.md deleted file mode 100644 index 60149d0e44..0000000000 --- a/content/v1/deployment/troubleshooting.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "Troubleshooting" -linkTitle: "Troubleshooting" -weight: 4 -Description: > - Troubleshooting guide ---- - -## Frequently Asked Questions -1. [Why does the installation fail due to an invalid cipherKey value?](#why-does-the-installation-fail-due-to-an-invalid-cipherkey-value) -2. 
[Why does the cluster-init pod show the error "cluster has already been initialized"?](#why-does-the-cluster-init-pod-show-the-error-cluster-has-already-been-initialized) -3. [Why does the precheck fail when creating an application?](#why-does-the-precheck-fail-when-creating-an-application) -4. [How can I view detailed logs for the CSM Installer?](#how-can-i-view-detailed-logs-for-the-csm-installer) -5. [After deleting an application, why can't I re-create the same application?](#after-deleting-an-application-why-cant-i-re-create-the-same-application) - -### Why does the installation fail due to an invalid cipherKey value? -The `cipherKey` value used during deployment of the CSM Installer must be exactly 32 characters in length and contained within quotes. - -### Why does the cluster-init pod show the error "cluster has already been initialized"? -During the initial start-up of the CSM Installer, the database will be initialized by the cluster-init job. If the CSM Installer is uninstalled and then re-installed on the same cluster, this error may be shown due to the Persistent Volume for the database already containing an initialized database. The CSM Installer will function as normal and the cluster-init job can be ignored. - -If a clean installation of the CSM Installer is required, the `dbVolumeDirectory` (default location `/var/lib/cockroachdb`) must be deleted from the worker node which is hosting the Persistent Volume. After this directory is deleted, the CSM Installer can be re-installed. - -Caution: Deleting the `dbVolumeDirectory` location will remove any data persisted by the CSM Installer including clusters, storage systems, and installed applications. - -### Why does the precheck fail when creating an application? -Each CSI Driver and CSM Module has required software or CRDs that must be installed before the application can be deployed in the cluster. These prechecks are verified when the `csm create application` command is executed. If the error message "create application failed" is displayed, [review the CSM Installer logs](#how-can-i-view-detailed-logs-for-the-csm-installer) to view details about the failed prechecks. - -If the precheck fails due to required software (e.g. iSCSI, NFS, SDC) not installed on the cluster nodes, follow these steps to address the issue: -1. Delete the cluster from the CSM Installer using the `csm delete cluster` command. -2. Update the nodes in the cluster by installing required software. -3. Add the cluster to the CSM Installer using the `csm add cluster` command. - -### How can I view detailed logs for the CSM Installer? -Detailed logs of the CSM Installer can be displayed using the following command: -``` -kubectl logs -f -n deploy/dell-csm-installer -``` - -### After deleting an application, why can't I re-create the same application? -After deleting an application using the `csm delete application` command, the namespace and other non-application resources including Secrets are not deleted from the cluster. This is to prevent removing any resources that may not have been created by the CSM Installer. The namespace must be manually deleted before attempting to re-create the same application using the CSM Installer. 
diff --git a/content/v1/observability/_index.md b/content/v1/observability/_index.md index 6b3ff27be8..8f9f05fc63 100644 --- a/content/v1/observability/_index.md +++ b/content/v1/observability/_index.md @@ -29,7 +29,7 @@ CSM for Observability is composed of several services, each living in its own Gi CSM for Observability provides the following capabilities: {{}} -| Capability | PowerMax | PowerFlex | Unity | PowerScale | PowerStore | +| Capability | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore | | - | :-: | :-: | :-: | :-: | :-: | | Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | no | yes | | Collect and expose File System Metrics via the OpenTelemetry Collector | no | no | no | no | yes | @@ -46,8 +46,8 @@ CSM for Observability provides the following capabilities: {{
}} | COP/OS | Supported Versions | |-|-| -| Kubernetes | 1.21, 1.22, 1.23 | -| Red Hat OpenShift | 4.8, 4.9 | +| Kubernetes | 1.22, 1.23, 1.24 | +| Red Hat OpenShift | 4.9, 4.10 | | Rancher Kubernetes Engine | yes | | RHEL | 7.x, 8.x | | CentOS | 7.8, 7.9 | @@ -67,8 +67,8 @@ CSM for Observability supports the following CSI drivers and versions. {{
}} | Storage Array | CSI Driver | Supported Versions | | ------------- | ---------- | ------------------ | -| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + | +| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + | {{
}} ## Topology Data @@ -79,7 +79,7 @@ CSM for Observability provides Kubernetes administrators with the topology data | -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | Namespace | The namespace associated with the persistent volume claim | | Persistent Volume | The name of the persistent volume | -| Status | The status of the persistent volume. "Released" indicating the persistent volume has a claim. "Bound" indicating the persistent volume has a claim | +| Status | The status of the persistent volume. "Released" indicates the persistent volume does not have a claim. "Bound" indicates the persistent volume has a claim | | Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume | | CSI Driver | The name of the CSI driver that was responsible for provisioning the volume on the storage system | | Created | The date the persistent volume was created | diff --git a/content/v1/observability/deployment/_index.md b/content/v1/observability/deployment/_index.md index 9a5d6f2566..50efaa2c3f 100644 --- a/content/v1/observability/deployment/_index.md +++ b/content/v1/observability/deployment/_index.md @@ -30,7 +30,7 @@ The Prometheus service should be running on the same Kubernetes cluster as the C | Supported Version | Image | Helm Chart | | ----------------- | ----------------------- | ------------------------------------------------------------ | -| 2.23.0 | prom/prometheus:v2.23.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) | +| 2.34.0 | prom/prometheus:v2.34.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) | **Note**: It is the user's responsibility to provide persistent storage for Prometheus if they want to preserve historical data. @@ -57,7 +57,7 @@ Here is a sample minimal configuration for Prometheus. Please note that the conf enabled: true image: repository: quay.io/prometheus/prometheus - tag: v2.23.0 + tag: v2.34.0 pullPolicy: IfNotPresent persistentVolume: enabled: false @@ -119,7 +119,7 @@ The Grafana dashboards require Grafana to be deployed in the same Kubernetes clu | Supported Version | Helm Chart | | ----------------- | --------------------------------------------------------- | -| 7.3.0-7.3.2 | [Grafana Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/grafana) | +| 8.5.0 | [Grafana Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/grafana) | Grafana must be configured with the following data sources/plugins: @@ -191,7 +191,7 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste # grafana-values.yaml image: repository: grafana/grafana - tag: 7.3.0 + tag: 8.5.0 sha: "" pullPolicy: IfNotPresent service: @@ -242,11 +242,11 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste ## Additional grafana server CofigMap mounts ## Defines additional mounts with CofigMap. CofigMap must be manually created in the namespace. 
extraConfigmapMounts: [] # If you created a ConfigMap on the previous step, delete [] and uncomment the lines below - # - name: certs-configmap - # mountPath: /etc/ssl/certs/ca-certificates.crt - # subPath: ca-certificates.crt - # configMap: certs-configmap - # readOnly: true + # - name: certs-configmap + # mountPath: /etc/ssl/certs/ca-certificates.crt + # subPath: ca-certificates.crt + # configMap: certs-configmap + # readOnly: true ``` 3. Add the Grafana Helm chart repository. diff --git a/content/v1/observability/deployment/helm.md b/content/v1/observability/deployment/helm.md index 6d76f8216f..02feb6186f 100644 --- a/content/v1/observability/deployment/helm.md +++ b/content/v1/observability/deployment/helm.md @@ -28,7 +28,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O `kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` - If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps: + If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps: 2. Copy the driver configuration parameters ConfigMap from the CSI PowerFlex namespace into the CSM for Observability namespace: diff --git a/content/v1/observability/deployment/offline.md b/content/v1/observability/deployment/offline.md index 076921deb0..b4c5ccd9d6 100644 --- a/content/v1/observability/deployment/offline.md +++ b/content/v1/observability/deployment/offline.md @@ -130,7 +130,7 @@ To perform an offline installation of a Helm chart, the following steps should b [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` - If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps: + If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform these steps: ``` [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - diff --git a/content/v1/observability/release/_index.md b/content/v1/observability/release/_index.md new file mode 100644 index 0000000000..84a9c87ea2 --- /dev/null +++ b/content/v1/observability/release/_index.md @@ -0,0 +1,19 @@ +--- +title: "Release notes" +linkTitle: "Release notes" +weight: 5 +Description: > + Dell Container Storage Modules (CSM) release notes for observability +--- + +## Release Notes - CSM Observability 1.2.0 + +### New Features/Changes + +### Fixed Issues + +- [PowerStore Grafana dashboard does not populate correctly ](https://github.com/dell/csm/issues/279) +- [Grafana installation script - prometheus address is incorrect](https://github.com/dell/csm/issues/278) +- [prometheus-values.yaml error in json](https://github.com/dell/csm/issues/259) + +### Known Issues \ No newline at end of file diff --git 
a/content/v1/FAQ/_index.md b/content/v1/references/FAQ/_index.md similarity index 99% rename from content/v1/FAQ/_index.md rename to content/v1/references/FAQ/_index.md index 39ffd7d493..b1fc7aabe0 100644 --- a/content/v1/FAQ/_index.md +++ b/content/v1/references/FAQ/_index.md @@ -2,7 +2,7 @@ title: "CSM FAQ" linktitle: "FAQ" description: Frequently asked questions of Dell Technologies (Dell) Container Storage Modules -weight: 2 +weight: 1 --- - [What are Dell Container Storage Modules (CSM)? How different is it from a CSI driver?](#what-are-dell-container-storage-modules-csm-how-different-is-it-from-a-csi-driver) diff --git a/content/v1/references/_index.md b/content/v1/references/_index.md new file mode 100644 index 0000000000..28cae60329 --- /dev/null +++ b/content/v1/references/_index.md @@ -0,0 +1,7 @@ +--- +title: "References" +linkTitle: "References" +weight: 13 +Description: > + Dell Technologies (Dell) Container Storage Modules (CSM) References +--- diff --git a/content/v1/contributionguidelines/_index.md b/content/v1/references/contributionguidelines/_index.md similarity index 99% rename from content/v1/contributionguidelines/_index.md rename to content/v1/references/contributionguidelines/_index.md index e02b519065..427bd231af 100644 --- a/content/v1/contributionguidelines/_index.md +++ b/content/v1/references/contributionguidelines/_index.md @@ -1,7 +1,7 @@ --- title: "Contribution Guidelines" linkTitle: "Contribution Guidelines" -weight: 12 +weight: 3 Description: > Dell Technologies (Dell) Container Storage Modules (CSM) docs Contribution Guidelines --- diff --git a/content/v1/grasp/_index.md b/content/v1/references/learn/_index.md similarity index 88% rename from content/v1/grasp/_index.md rename to content/v1/references/learn/_index.md index f81a8d8e68..9facbd2d26 100644 --- a/content/v1/grasp/_index.md +++ b/content/v1/references/learn/_index.md @@ -1,5 +1,5 @@ --- title: Learn Description: Brief tutorials on Devops, Kubernetes and containers -weight: 10 +weight: 2 --- diff --git a/content/v1/grasp/start.md b/content/v1/references/learn/start.md similarity index 100% rename from content/v1/grasp/start.md rename to content/v1/references/learn/start.md diff --git a/content/v1/grasp/video.md b/content/v1/references/learn/video.md similarity index 100% rename from content/v1/grasp/video.md rename to content/v1/references/learn/video.md diff --git a/content/v1/references/policies/_index.md b/content/v1/references/policies/_index.md new file mode 100644 index 0000000000..a5e2875d16 --- /dev/null +++ b/content/v1/references/policies/_index.md @@ -0,0 +1,7 @@ +--- +title: "Policies" +linkTitle: "Policies" +weight: 4 +Description: > + Dell Technologies (Dell) Container Storage Modules (CSM) Policies +--- diff --git a/content/v1/policies/deprecationpolicy/_index.md b/content/v1/references/policies/deprecationpolicy/_index.md similarity index 100% rename from content/v1/policies/deprecationpolicy/_index.md rename to content/v1/references/policies/deprecationpolicy/_index.md diff --git a/content/v1/release/_index.md b/content/v1/release/_index.md new file mode 100644 index 0000000000..97a5c32dc9 --- /dev/null +++ b/content/v1/release/_index.md @@ -0,0 +1,19 @@ +--- +title: "Release notes" +linkTitle: "Release notes" +weight: 10 +Description: > + Dell Container Storage Modules (CSM) release notes +--- + +Release notes for Container Storage Modules: + +[CSI Drivers](../csidriver/release) + +[CSM for Authorization](../authorization/release) + +[CSM for 
Observability](../observability/release) + +[CSM for Replication](../replication/release) + +[CSM for Resiliency](../resiliency/release) \ No newline at end of file diff --git a/content/v1/replication/_index.md b/content/v1/replication/_index.md index cae6e7d45d..df4d1bb45c 100644 --- a/content/v1/replication/_index.md +++ b/content/v1/replication/_index.md @@ -30,8 +30,8 @@ CSM for Replication provides the following capabilities: {{}} | COP/OS | PowerMax | PowerStore | PowerScale | |---------------|------------------|------------------|------------| -| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | -| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | +| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | +| Red Hat OpenShift | 4.9, 4.10 | 4.9, 4.10 | 4.9, 4.10 | | RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x | | CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | | Ubuntu | 20.04 | 20.04 | 20.04 | @@ -50,11 +50,11 @@ CSM for Replication provides the following capabilities: CSM for Replication supports the following CSI drivers and versions. {{
}} -| Storage Array | CSI Driver | Supported Versions | -| ------------------------------ | -------------------------------------------------------- | ------------------ | -| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 | +| Storage Array | CSI Driver | Supported Versions | +| ------------- | ---------- | ------------------ | +| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0 + | +| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 + | {{
}} ## Details @@ -74,6 +74,8 @@ the objects still exist in pairs. * Start applications after the migration. * Replicate `PersistentVolumeClaim` objects within/across clusters. * Replication with METRO mode does not need Replicator sidecar and common controller. +* Different namespaces cannot share the same RDF group for creating volumes with ASYNC mode for PowerMax. +* Same RDF group cannot be shared across different replication modes for PowerMax. ### CSM for Replication Module Capabilities @@ -94,9 +96,9 @@ The following matrix provides a list of all supported versions for each Dell Sto | Platforms | PowerMax | PowerStore | PowerScale | | ---------- | ----------------- | ---------------- | ---------------- | -| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | -| RedHat Openshift |4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | -| CSI Driver | 2.x | 2.x | 2.2+ | +| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | +| RedHat Openshift |4.9, 4.10 | 4.9, 4.10 | 4.9, 4.10 | +| CSI Driver | 2.x(k8s),
2.2+(OpenShift)| 2.x | 2.2+ | For compatibility with storage arrays please refer to corresponding [CSI drivers](../csidriver/#features-and-capabilities) diff --git a/content/v1/replication/deployment/installation.md index 3a30e17f5e..005637fac7 100644 --- a/content/v1/replication/deployment/installation.md +++ b/content/v1/replication/deployment/installation.md @@ -47,12 +47,15 @@ kubectl create ns dell-replication-controller cp ../helm/csm-replication/values.yaml ./myvalues.yaml bash scripts/install.sh --values ./myvalues.yaml ``` ->Note: Current installation method allows you to specify custom `<ip>:<hostname>` entry to be appended to controller's `/etc/hosts` file. It can be useful if controller is being deployed in private environment where DNS is not set up properly, but kubernetes clusters use FQDN as API server's address. +>Note: The current installation method allows you to specify custom `<ip>:<hostname>` entries to be appended to the controller's `/etc/hosts` file. This can be useful if the controller is deployed in a private environment where DNS is not set up properly, but the kubernetes clusters use an FQDN as the API server's address. > The feature can be enabled by modifying `values.yaml`. >``` hostAliases: -> enableHostAliases: true -> hostName: "foo.bar" -> ip: "10.10.10.10" +> - ip: "10.10.10.10" +> hostnames: +> - "foo.bar" +> - ip: "10.10.10.11" +> hostnames: +> - "foo.baz" This script will do the following: 1. Install `DellCSIReplicationGroup` CRD in your cluster diff --git a/content/v1/replication/high-availability.md index 447036e440..1f2d9b7fe2 100644 --- a/content/v1/replication/high-availability.md +++ b/content/v1/replication/high-availability.md @@ -46,6 +46,9 @@ reclaimPolicy: Delete volumeBindingMode: Immediate ``` +> Note: Different namespaces can share the same RDF group for creating volumes. + + ### Snapshots on SRDF Metro volumes A snapshot can be created on either of the volumes in the metro volume pair depending on the parameters in the `VolumeSnapshotClass`. The snapshots are by default created on the volumes on the R1 side of the SRDF metro pair, but if a Symmetrix id is specified in the `VolumeSnapshotClass` parameters, the driver creates the snapshot on the specified array; the specified array can either be the R1 or the R2 array. A `VolumeSnapshotClass` with symmetrix id specified in parameters may look as follows: @@ -59,4 +62,4 @@ driver: driver.dellemc.com deletionPolicy: Delete parameters: SYMID: '000000000001' -``` \ No newline at end of file +``` diff --git a/content/v1/replication/migrating-volumes.md b/content/v1/replication/migrating-volumes.md new file mode 100644 index 0000000000..da524dc314 --- /dev/null +++ b/content/v1/replication/migrating-volumes.md @@ -0,0 +1,145 @@ +--- +title: Migrating Volumes +linktitle: Migrating Volumes +weight: 6 +description: > + Migrating Volumes Between Storage Classes +--- + +You can migrate existing pre-provisioned volumes to another storage class by using the volume migration feature. + +As of CSM 1.3, two types of migration are supported: +- To a replicated storage class from a non-replicated one +- To a non-replicated storage class from a replicated one + +## Prerequisites +- The original volume is from one of the currently supported CSI drivers (see Support Matrix) +- The migration sidecar is installed alongside the driver; you can enable it in your `myvalues.yaml` file +```yaml +migration: + enabled: true +``` + +## Support Matrix +| Migration Type | PowerMax | PowerStore | PowerScale | PowerFlex | Unity XT | +| - | - | - | - | - | - | +| NON_REPL_TO_REPL | Yes | No | No | No | No | +| REPL_TO_NON_REPL | Yes | No | No | No | No | + + +## Basic Usage + +To trigger the migration procedure, patch an existing PersistentVolume with the migration annotation (by default `migration.storage.dell.com/migrate-to`) and, as the value of that annotation, specify the name of the StorageClass you want to migrate to. + +For example, if we have a PV named `test-pv` already provisioned and we want to migrate it to a replicated storage class named `powermax-replication`, we can run: + +```shell +kubectl patch pv test-pv -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}' +``` + +Patching the PV resource will trigger the migration sidecar, which issues the `VolumeMigrate` call to the CSI driver. After migration is finished, a new PersistentVolume will be created in the cluster, named after the original PV with `-to-<new-storage-class-name>` appended to it. + +In our example, we will see this when running `kubectl get pv`: +```shell +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +test-pv 1Gi RWO Retain Bound default/test-pvc powermax 5m +test-pv-to-powermax-replication 1Gi RWO Retain Available powermax-replication 10s + +``` + +When the volume migration is finished, the source PV will be updated with an event that denotes that this has taken place. + +The newly created PV (`test-pv-to-powermax-replication` in our example) is available for consumption via static provisioning by any PVC that requests it. + + +## Namespace Considerations For Replication + +Replication Groups in CSM Replication can be made namespaced, meaning that one SC will generate one Replication Group per namespace. This is also important when migrating volumes from/to a replicated storage class. + +When only the `migration.storage.dell.com/migrate-to` annotation is set, the migrated volume is assumed to be used in the same namespace as the original PV and its PVC. If it is migrated to a replication-enabled storage class, it will be inserted into the namespaced Replication Group inside the PVC's namespace. + +However, you can define in which namespace the migrated volume must be used after migration by setting the `migration.storage.dell.com/namespace` annotation, as in the sketch below. You can use the same annotation in a scenario where you only have a statically provisioned PV that is not bound to any PVC, and you want to migrate it to another storage class.
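For instance, both annotations can be set in a single patch; the PV name `test-pv` and target namespace `prod` below are hypothetical:

```shell
# Migrate test-pv to the replicated storage class and place the migrated
# volume's Replication Group in the "prod" namespace instead of the
# source PVC's namespace
kubectl patch pv test-pv -p '{"metadata": {"annotations": {"migration.storage.dell.com/migrate-to": "powermax-replication", "migration.storage.dell.com/namespace": "prod"}}}'
```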
+ +## Non Disruptive Migration + +You can migrate your PVs without disrupting workloads if you use a StatefulSet with multiple replicas to deploy the application. + +Instructions (you can also use `repctl` for convenience; a consolidated sketch follows the steps): + +1. Find every PV for your StatefulSet and patch it with the `migration.storage.dell.com/migrate-to` annotation pointing to the new storage class +```shell +kubectl patch pv <pv-name> -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}' +``` + +2. Ensure you have a copy of the StatefulSet manifest ready; we will need it later. If you don't have it, you can get it from the cluster +```shell +kubectl get sts <sts-name> -n <namespace> -o yaml > sts-manifest.yaml +``` + +3. To avoid disrupting any workloads, we need to delete the StatefulSet without deleting its pods; to do so, use the `--cascade` flag +```shell +kubectl delete sts <sts-name> -n <namespace> --cascade=orphan +``` + +4. Change the StorageClass in your StatefulSet manifest to point to the new storage class, then apply it to the cluster +```shell +kubectl apply -f sts-manifest.yaml +``` + +5. Find the PVC and pod of one replica of the StatefulSet; delete the PVC first and the pod after it +```shell +kubectl delete pvc <pvc-name> -n <namespace> +``` +```shell +kubectl delete pod <pod-name> -n <namespace> +``` + +Wait for a new pod to be created by the StatefulSet; it should create a new PVC that uses the migrated PV. + +6. Repeat step 5 until all replicas use new PVCs
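Put together, a minimal end-to-end sketch of the flow above might look like this; the StatefulSet name `web`, namespace `default`, PV/PVC names, and target storage class `powermax-replication` are all hypothetical, and PVC names are assumed to follow the usual `<claim-template>-<sts-name>-<ordinal>` pattern:

```shell
# 1. Patch every PV belonging to the StatefulSet (repeat for each PV)
kubectl patch pv web-pv-0 -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}'

# 2-3. Save the StatefulSet manifest, then delete it while orphaning its pods
kubectl get sts web -n default -o yaml > sts-manifest.yaml
kubectl delete sts web -n default --cascade=orphan

# 4. Edit sts-manifest.yaml to reference the new storage class, then re-apply
kubectl apply -f sts-manifest.yaml

# 5-6. For each replica in turn: delete its PVC, then its pod, and wait for
# the StatefulSet to re-create them against the migrated PV
kubectl delete pvc data-web-0 -n default
kubectl delete pod web-0 -n default
```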
+ +## Using repctl + +You can use the `repctl` CLI tool to simplify running migration-specific commands. + +### Single PV + +In its most basic form, repctl can do the same as kubectl; for example, migrating the single PV from our example looks like this: + +```shell +./repctl migrate pv test-pv --to-sc powermax-replication +``` + +`repctl` will patch the resource for you. You can also provide the `--wait` flag to make it wait until the migrated PV is created in the cluster. +`repctl` can also set `migration.storage.dell.com/namespace` for you if you provide the `--target-ns` flag. + + +Aside from migrating single PVs, repctl can migrate PVCs and StatefulSets. + +### PVC + +`repctl` can find the PV for any given PVC and patch it. +This can be done with a command similar to single PV migration: + +```shell +./repctl migrate pvc test-pvc --to-sc powermax-replication -n default +``` + +Notice that we provide the original namespace (`default` in our example) for this command because PVCs are namespaced resources and we need the namespace to be able to find the PVC. + + +### StatefulSet + + +`repctl` can help you migrate an entire StatefulSet by automating the migration process. + +You can use this command to do so: +```shell +./repctl migrate sts test-sts --to-sc powermax-replication -n default +``` + +By default, it will find every Pod, PVC, and PV for the provided StatefulSet and patch every PV with the annotation. + +You can also optionally provide the `--ndu` flag; with this flag, repctl performs the steps described in the [Non Disruptive Migration](#non-disruptive-migration) section automatically. diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md new file mode 100644 index 0000000000..9d19354c4f --- /dev/null +++ b/content/v1/replication/release/_index.md @@ -0,0 +1,26 @@ +--- +title: "Release notes" +linkTitle: "Release notes" +weight: 9 +Description: > + Dell Container Storage Modules (CSM) release notes for replication +--- + +## Release Notes - CSM Replication 1.3.0 + +### New Features/Changes +- Added support for Kubernetes 1.24 +- Added support for OpenShift 4.10 +- Added volume upgrade/downgrade functionality for replication volumes + + +### Fixed Issues +- Fixed panic occurring when encountering PVC with empty StorageClass +- PV and RG retention policy checks are no longer case sensitive +- RG will now display EMPTY link state when no PV found +- [`PowerScale`] Running `reprotect` action on source cluster after failover no longer puts RG into UNKNOWN state +- [`PowerScale`] Deleting RG will break replication link before trying to delete group on array + +### Known Issues + +There are no known issues in this release. 
diff --git a/content/v1/resiliency/_index.md b/content/v1/resiliency/_index.md index 7ccb890831..ab043bc23d 100644 --- a/content/v1/resiliency/_index.md +++ b/content/v1/resiliency/_index.md @@ -27,30 +27,30 @@ Accordingly, CSM for Resiliency is adapted to and qualified with each CSI driver CSM for Resiliency provides the following capabilities: {{}} -| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax | -| --------------------------------------- | :--------: | :---: | :--------: | :-------: | :------: | -| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | no | yes | no | yes | no | -| Cleanup pod artifacts from failed nodes | no | yes | no | yes | no | -| Revoke PV access from failed nodes | no | yes | no | yes | no | +| Capability | PowerScale | Unity XT | PowerStore | PowerFlex | PowerMax | +| --------------------------------------- | :--------: | :------: | :--------: | :-------: | :------: | +| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | yes | yes | no | yes | no | +| Cleanup pod artifacts from failed nodes | yes | yes | no | yes | no | +| Revoke PV access from failed nodes | yes | yes | no | yes | no | {{
}} ## Supported Operating Systems/Container Orchestrator Platforms {{}} -| COP/OS | Supported Versions | -| ---------- | :----------------: | -| Kubernetes | 1.21, 1.22, 1.23 | -| Red Hat OpenShift | 4.8, 4.9 | -| RHEL | 7.x, 8.x | -| CentOS | 7.8, 7.9 | +| COP/OS | Supported Versions | +| ----------------- | :----------------: | +| Kubernetes | 1.22, 1.23, 1.24 | +| Red Hat OpenShift | 4.9, 4.10 | +| RHEL | 7.x, 8.x | +| CentOS | 7.8, 7.9 | {{
}} ## Supported Storage Platforms {{}} -| | PowerFlex | Unity | -| ------------- | :----------: | :------------------------: | -| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0, 5.1.2 | +| | PowerFlex | Unity XT | PowerScale | +| ------------- | :----------: | :-------------------------------: | :-------------------------------------: | +| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3, 9.4 | {{
}} ## Supported CSI Drivers @@ -59,30 +59,39 @@ CSM for Resiliency supports the following CSI drivers and versions. {{}} | Storage Array | CSI Driver | Supported Versions | | --------------------------------- | :----------: | :----------------: | -| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell Unity | [csi-unity](https://github.com/dell/csi-unity) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0.0 + | +| CSI Driver for Dell Unity XT | [csi-unity](https://github.com/dell/csi-unity) | v2.0.0 + | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.3.0 + | {{
}} ### PowerFlex Support -PowerFlex is a highly scalable array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerFlex leverages the following PowerFlex features: +PowerFlex is a highly scalable array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerFlex leverages these PowerFlex features: * Very quick detection of Array I/O Network Connectivity status changes (generally takes 1-2 seconds for the array to detect changes) * A robust mechanism if Nodes are doing I/O to volumes (sampled over a 5-second period). * Low latency REST API supports fast CSI provisioning and de-provisioning operations. * A proprietary network protocol provided by the SDC component that can run over the same IP interface as the K8S control plane or over a separate IP interface for Array I/O. -### Unity Support - -Dell Unity is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity systems are designed for all-Flash, deliver the best value in the market, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition. +### Unity XT Support + +Dell Unity XT is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity XT systems are designed to deliver the best value in the market. They support all-Flash, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition. -* Unity (purpose-built): A modern midrange storage solution, engineered from the groundup to meet market demands for Flash, affordability and incredible simplicity. The Unity Family is available in 12 All Flash models and 12 Hybrid models. -* VxBlock (converged): Unity storage options are also available in Dell VxBlock System 1000. -* UnityVSA (virtual): The Unity Virtual Storage Appliance (VSA) allows the advanced unified storage and data management features of the Unity family to be easily deployed on VMware ESXi servers, for a ‘software defined’ approach. UnityVSA is available in two editions: +* Unity XT (purpose-built): A modern midrange storage solution, engineered from the ground up to meet market demands for Flash, affordability and incredible simplicity. The Unity XT Family is available in 12 All Flash models and 12 Hybrid models. +* VxBlock (converged): Unity XT storage options are also available in Dell VxBlock System 1000. +* UnityVSA (virtual): The Unity XT Virtual Storage Appliance (VSA) allows the advanced unified storage and data management features of the Unity XT family to be easily deployed on VMware ESXi servers. This allows for a ‘software defined’ approach. UnityVSA is available in two editions: * Community Edition is a free downloadable 4 TB solution recommended for nonproduction use. * Professional Edition is a licensed subscription-based offering available at capacity levels of 10 TB, 25 TB, and 50 TB. The subscription includes access to online support resources, EMC Secure Remote Services (ESRS), and on-call software- and systems-related support. -All three deployment options, i.e. Unity, UnityVSA, and Unity-based VxBlock, enjoy one architecture, one interface with consistent features and rich data services. +All three deployment options, Unity XT, UnityVSA, and Unity-based VxBlock, enjoy one architecture, one interface with consistent features and rich data services. 
+ +### PowerScale Support + +PowerScale is a highly scalable NFS array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerScale leverages the following PowerScale features: + +* Detection of Array I/O Network Connectivity status changes. +* A robust mechanism to detect if Nodes are actively doing I/O to volumes. +* Low latency REST API supports fast CSI provisioning and de-provisioning operations. ## Limitations and Exclusions @@ -97,11 +106,11 @@ The following provisioning types are supported and have been tested: * Use of the above volumes with Pods created by StatefulSets. * Up to 12 or so protected pods on a given node. * Failing up to 3 nodes at a time in 9 worker node clusters, or failing 1 node at a time in smaller clusters. Application recovery times are dependent on the number of pods that need to be moved as a result of the failure. See the section on "Testing and Performance" for some of the details. +* Multi-array configurations are supported. For the CSI Driver for PowerScale and the CSI Driver for Unity XT, if any one of the arrays is not connected, array connectivity is reported as false. For the CSI Driver for PowerFlex, connectivity is determined by the connection to the default array. ### Not Tested But Assumed to Work * Deployments with the above volume types, provided two pods from the same deployment do not reside on the same node. At the current time anti-affinity rules should be used to guarantee no two pods accessing the same volumes are scheduled to the same node. -* Multi-array support ### Not Yet Tested or Supported diff --git a/content/v1/resiliency/deployment.md index 6da570dfd5..8a4a20519f 100644 --- a/content/v1/resiliency/deployment.md +++ b/content/v1/resiliency/deployment.md @@ -10,7 +10,9 @@ CSM for Resiliency is installed as part of the Dell CSI driver installation. The For information on the PowerFlex CSI driver, see [PowerFlex CSI Driver](https://github.com/dell/csi-powerflex). -For information on the Unity CSI driver, see [Unity CSI Driver](https://github.com/dell/csi-unity). +For information on the Unity XT CSI driver, see [Unity XT CSI Driver](https://github.com/dell/csi-unity). + +For information on the PowerScale CSI driver, see [PowerScale CSI Driver](https://github.com/dell/csi-powerscale). Configure all the helm chart parameters described below before installing the drivers. @@ -23,7 +25,7 @@ The drivers that support Helm chart installation allow CSM for Resiliency to be # Enable this feature only after contact support for additional information podmon: enabled: true - image: dellemc/podmon:v1.1.0 + image: dellemc/podmon:v1.2.0 controller: args: - "--csisock=unix:/var/run/csi/csi.sock" @@ -31,6 +33,7 @@ podmon: - "--mode=controller" - "--skipArrayConnectionValidation=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" node: args: - "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock" @@ -38,6 +41,7 @@ podmon: - "--mode=node" - "--leaderelection=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" ``` @@ -58,8 +62,8 @@ To install CSM for Resiliency with the driver, the following changes are require | leaderelection | Required | Boolean value that should be set true for controller and false for node. The default value is true. 
| controller & node | | skipArrayConnectionValidation | Optional | Boolean value that if set to true will cause controllerPodCleanup to skip the validation that no I/O is ongoing before cleaning up the pod. If set to true will cause controllerPodCleanup on K8S Control Plane failure (kubelet service down). | controller | | labelKey | Optional | String value that sets the label key used to denote pods to be monitored by CSM for Resiliency. It will make life easier if this key is the same for all driver types, and drivers are differentiated by different labelValues (see below). If the label keys are the same across all drivers you can do `kubectl get pods -A -l labelKey` to find all the CSM for Resiliency protected pods. labelKey defaults to "podmon.dellemc.com/driver". | controller & node | -| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell PowerFlex and "csi-unity" for CSI Driver for Dell Unity | controller & node | -| arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller | +| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell PowerFlex and "csi-unity" for CSI Driver for Dell Unity XT | controller & node | +| arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller & node | | arrayConnectivityConnectionLossThreshold | Optional | Gives the number of failed connection polls that will be deemed to indicate array connectivity loss. Should not be set to less than 3. See the specific section for each array type for additional guidance. | controller | | driver-config-params | Required | String that set the path to a file containing configuration parameter(for instance, Log levels) for a driver. 
| controller & node | @@ -75,24 +79,26 @@ podmon: enabled: true controller: args: - - "-csisock=unix:/var/run/csi/csi.sock" - - "-labelvalue=csi-vxflexos" - - "-mode=controller" - - "-arrayConnectivityPollRate=5" - - "-arrayConnectivityConnectionLossThreshold=3" + - "--csisock=unix:/var/run/csi/csi.sock" + - "--labelvalue=csi-vxflexos" + - "--mode=controller" + - "--arrayConnectivityPollRate=5" + - "--arrayConnectivityConnectionLossThreshold=3" - "--skipArrayConnectionValidation=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" node: args: - - "-csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock" - - "-labelvalue=csi-vxflexos" - - "-mode=node" - - "-leaderelection=false" + - "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock" + - "--labelvalue=csi-vxflexos" + - "--mode=node" + - "--leaderelection=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" ``` -## Unity Specific Recommendations +## Unity XT Specific Recommendations Here is a typical installation used for testing: @@ -102,28 +108,60 @@ podmon: enabled: true controller: args: - - "-csisock=unix:/var/run/csi/csi.sock" - - "-labelvalue=csi-unity" - - "-driverPath=csi-unity.dellemc.com" - - "-mode=controller" + - "--csisock=unix:/var/run/csi/csi.sock" + - "--labelvalue=csi-unity" + - "--driverPath=csi-unity.dellemc.com" + - "--mode=controller" - "--skipArrayConnectionValidation=false" - "--driver-config-params=/unity-config/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" node: args: - - "-csisock=unix:/var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock" - - "-labelvalue=csi-unity" - - "-driverPath=csi-unity.dellemc.com" - - "-mode=node" - - "-leaderelection=false" + - "--csisock=unix:/var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock" + - "--labelvalue=csi-unity" + - "--driverPath=csi-unity.dellemc.com" + - "--mode=node" + - "--leaderelection=false" - "--driver-config-params=/unity-config/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" + +``` + +## PowerScale Specific Recommendations + +Here is a typical installation used for testing: +```yaml +podmon: + image: dellemc/podmon + enabled: true + controller: + args: + - "--csisock=unix:/var/run/csi/csi.sock" + - "--labelvalue=csi-isilon" + - "--arrayConnectivityPollRate=60" + - "--driverPath=csi-isilon.dellemc.com" + - "--mode=controller" + - "--skipArrayConnectionValidation=false" + - "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" + node: + args: + - "--csisock=unix:/var/lib/kubelet/plugins/csi-isilon/csi_sock" + - "--labelvalue=csi-isilon" + - "--arrayConnectivityPollRate=60" + - "--driverPath=csi-isilon.dellemc.com" + - "--mode=node" + - "--leaderelection=false" + - "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml" + - "--driverPodLabelValue=dell-storage" ``` ## Dynamic parameters CSM for Resiliency has configuration parameters that can be updated dynamically, such as the logging level and format. This can be -done by editing the DellEMC CSI Driver's parameters ConfigMap. The ConfigMap can be queried using kubectl. -For example, the DellEMC Powerflex CSI Driver ConfigMaps can be found using the following command: `kubectl get -n vxflexos configmap`. +done by editing the Dell CSI Driver's parameters ConfigMap. The ConfigMap can be queried using kubectl. 
+For example, the Dell Powerflex CSI Driver ConfigMaps can be found using this command: `kubectl get -n vxflexos configmap`. The ConfigMap to edit will have this pattern: `<driver-name>-config-params` (e.g., `vxflexos-config-params`). To update or add parameters, you can use the `kubectl edit` command. For example, `kubectl edit -n vxflexos configmap vxflexos-config-params`. diff --git a/content/v1/resiliency/release/_index.md b/content/v1/resiliency/release/_index.md new file mode 100644 index 0000000000..3beec86748 --- /dev/null +++ b/content/v1/resiliency/release/_index.md @@ -0,0 +1,21 @@ +--- +title: "Release notes" +linkTitle: "Release notes" +weight: 1 +Description: > + Dell Container Storage Modules (CSM) release notes for resiliency +--- + +## Release Notes - CSM Resiliency 1.2.0 + +### New Features/Changes + +- Support for node taint when driver pod is unhealthy. +- Resiliency protection on driver node pods, see [CSI node failure protection](https://github.com/dell/csm/issues/145). +- Resiliency support for CSI Driver for PowerScale, see [CSI Driver for PowerScale](https://github.com/dell/csm/issues/262). + +### Fixed Issues + +- Occasional failure unmounting Unity volume for raw block devices via iSCSI, see [unmounting Unity volume](https://github.com/dell/csm/issues/237). + +### Known Issues \ No newline at end of file diff --git a/content/v1/resiliency/upgrade.md b/content/v1/resiliency/upgrade.md index 4466c77cc6..a8cc56a9c2 100644 --- a/content/v1/resiliency/upgrade.md +++ b/content/v1/resiliency/upgrade.md @@ -10,7 +10,9 @@ CSM for Resiliency can be upgraded as part of the Dell CSI driver upgrade proces For information on the PowerFlex CSI driver upgrade process, see [PowerFlex CSI Driver](../../csidriver/upgradation/drivers/powerflex). -For information on the Unity CSI driver upgrade process, see [Unity CSI Driver](../../csidriver/upgradation/drivers/unity). +For information on the Unity XT CSI driver upgrade process, see [Unity XT CSI Driver](../../csidriver/upgradation/drivers/unity). + +For information on the PowerScale CSI driver upgrade process, see [PowerScale CSI Driver](../../csidriver/upgradation/drivers/isilon). ## Helm Chart Upgrade diff --git a/content/v1/resiliency/usecases.md b/content/v1/resiliency/usecases.md index daac595325..22ce18aae0 100644 --- a/content/v1/resiliency/usecases.md +++ b/content/v1/resiliency/usecases.md @@ -38,3 +38,5 @@ CSM for Resiliency's design is focused on detecting the following types of hardw 3. Array I/O Network failure is detected by polling the array to determine if the array has a healthy connection to the node. The capabilities to do this vary greatly by array and communication protocol type (Fibre Channel, iSCSI, NFS, NVMe, or PowerFlex SDC IP protocol). By monitoring the Array I/O Network separately from the Control Plane Network, CSM for Resiliency has two different indicators of whether the node is healthy or not. 4. K8S Control Plane Failure. Control Plane Failure is defined as failure of kubelet in a given node. K8S Control Plane failures are generally discovered by receipt of a Node event with a NoSchedule or NoExecute taint, or detection of such a taint when retrieving the Node via the K8S API. + +5. CSI Driver node pods. CSM for Resiliency monitors CSI driver node pods. If for any reason the CSI Driver node pods fail and enter the Not Ready state, it will taint the node with the NoSchedule value. This prevents the Kubernetes scheduler from scheduling new workloads on the given node, thereby avoiding placement of workloads that need the CSI Driver pods to be in the Ready state.
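As a quick check, you can inspect a node's taints to confirm whether CSM for Resiliency has applied the NoSchedule taint; the node name `worker-1` below is hypothetical:

```shell
# Show any taints currently applied to the node
kubectl describe node worker-1 | grep -A 3 Taints
```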
diff --git a/content/v1/snapshots/volume-group-snapshots/_index.md b/content/v1/snapshots/volume-group-snapshots/_index.md new file mode 100644 index 0000000000..c266498bef --- /dev/null +++ b/content/v1/snapshots/volume-group-snapshots/_index.md @@ -0,0 +1,51 @@ +--- +title: "Volume Group Snapshots" +linkTitle: "Volume Group Snapshots" +weight: 8 +Description: > + Volume Group Snapshot module of Dell CSI drivers +--- +## Volume Group Snapshot Feature + +In order to use Volume Group Snapshots, ensure that the following volume snapshot components are installed: +- Kubernetes Volume Snapshot CRDs +- Volume Snapshot Controller +- Volume Snapshot Class + +### Creating Volume Group Snapshots +This is a sample manifest for creating a Volume Group Snapshot: +```yaml +apiVersion: volumegroup.storage.dell.com/v1 +kind: DellCsiVolumeGroupSnapshot +metadata: + name: "vgs-test" + namespace: "test" +spec: + # Add fields here + driverName: "csi-<driver-name>.dellemc.com" # Example: "csi-powerstore.dellemc.com" + # defines how to process VolumeSnapshot members when volume group snapshot is deleted + # "Retain" - keep VolumeSnapshot instances + # "Delete" - delete VolumeSnapshot instances + memberReclaimPolicy: "Retain" + volumesnapshotclass: "" + pvcLabel: "vgs-snap-label" + # pvcList: + # - "pvcName1" + # - "pvcName2" +``` + +The PVC labels field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC: + +```yaml +metadata: + name: volume1 + namespace: test + labels: + volume-group: vgs-snap-label +``` + +More details about the installation and use of the VolumeGroup Snapshotter can be found here: [dell-csi-volumegroup-snapshotter](https://github.com/dell/csi-volumegroup-snapshotter). + +>Note: Volume groups cannot be seen at the Kubernetes level; as of now, only volume group snapshots can be viewed as a CRD. + +>The Volume Group Snapshots feature is supported with Helm. 
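Assuming the sample manifest above is saved as `vgs-test.yaml`, a minimal sketch of creating and inspecting a volume group snapshot might look like this; the CRD's resource name may vary by driver and snapshotter version:

```shell
# Create the volume group snapshot custom resource
kubectl create -f vgs-test.yaml

# Inspect the volume group snapshot CR and the member VolumeSnapshots it created
kubectl get dellcsivolumegroupsnapshot -n test
kubectl get volumesnapshot -n test
```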
diff --git a/content/v2/_index.md b/content/v2/_index.md index 68f876afee..181e677e61 100644 --- a/content/v2/_index.md +++ b/content/v2/_index.md @@ -17,23 +17,23 @@ CSM is made up of multiple components including modules (enterprise capabilities ## CSM Supported Modules and Dell CSI Drivers -| Modules/Drivers | CSM 1.2 | [CSM 1.1](../v1/) | [CSM 1.0.1](../v1/) | [CSM 1.0](../v2/) | +| Modules/Drivers | CSM 1.2.1 | [CSM 1.2](../v1/) | [CSM 1.1](../v1/) | [CSM 1.0.1](../v2/) | | - | :-: | :-: | :-: | :-: | -| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.1 | 1.0 | 1.0 | -| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1 | 1.0.1 | 1.0.1 | 1.0 | -| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.1 | 1.0 | 1.0 | -| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.0.1 | 1.0.1 | 1.0 | -| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.1 | v2.0 | v2.0 | -| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.1 | v2.0 | v2.0 | -| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.1 | v2.0 | v2.0 | -| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.1 | v2.0 | v2.0 | -| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.1 | v2.0 | v2.0 | +| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.2 | 1.1 | 1.0 | +| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1.1 | 1.1 | 1.0.1 | 1.0.1 | +| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.2 | 1.1 | 1.0 | +| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.1 | 1.0.1 | 1.0.1 | +| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.2 | v2.1 | v2.0 | +| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.2 | v2.1 | v2.0 | +| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.2 | v2.1 | v2.0 | +| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.2 | v2.1 | v2.0 | +| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.2 | v2.1 | v2.0 | ## CSM Modules Support Matrix for Dell CSI Drivers | CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 | | ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- | | Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ | -| Observability v1.1| ✔️ | ❌ | ✔️ | ❌ | ❌ | +| Observability v1.1.1 | ✔️ | ❌ | ✔️ | ❌ | ❌ | | Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ | | Resilency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️ | \ No newline at end of file diff --git a/content/v2/csidriver/features/powermax.md b/content/v2/csidriver/features/powermax.md index 55a57131c9..a635b79ec6 100644 --- a/content/v2/csidriver/features/powermax.md +++ b/content/v2/csidriver/features/powermax.md @@ -78,6 +78,8 @@ spec: ### Creating PVCs with PVCs as source +This is not supported for replicated volumes. + This is a sample manifest for creating a PVC with another PVC as a source: ```yaml apiVersion: v1 @@ -158,6 +160,8 @@ To install multiple CSI drivers, follow these steps: Starting in v1.4, the CSI PowerMax driver supports the expansion of Persistent Volumes (PVs). 
This expansion is done online, which is when the PVC is attached to any node. +>Note: This feature is not supported for replicated volumes. + To use this feature, enable in `values.yaml` ```yaml diff --git a/content/v2/csidriver/installation/offline/_index.md b/content/v2/csidriver/installation/offline/_index.md index 59a7c082f3..07b0000bdb 100644 --- a/content/v2/csidriver/installation/offline/_index.md +++ b/content/v2/csidriver/installation/offline/_index.md @@ -65,10 +65,10 @@ The resulting offline bundle file can be copied to another machine, if necessary For example, here is the output of a request to build an offline bundle for the Dell CSI Operator: ``` -git clone https://github.com/dell/dell-csi-operator.git +git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git ``` ``` -cd dell-csi-operator +cd dell-csi-operator/scripts ``` ``` [root@user scripts]# ./csi-offline-bundle.sh -c diff --git a/content/v2/csidriver/installation/operator/_index.md b/content/v2/csidriver/installation/operator/_index.md index 71140cd643..be62fc2dec 100644 --- a/content/v2/csidriver/installation/operator/_index.md +++ b/content/v2/csidriver/installation/operator/_index.md @@ -97,10 +97,9 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n #### Steps >**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.** -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). +1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`. 2. cd dell-csi-operator -3. git checkout dell-csi-operator-`your-version' -4. Run `bash scripts/install.sh` to install the operator. +3. Run `bash scripts/install.sh` to install the operator. >NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. Any existing installations of Dell CSI Operator (v1.2.0 or later) installed using `install.sh` to the 'default' or 'dell-csi-operator' namespace can be upgraded to the new version by running `install.sh --upgrade`. diff --git a/content/v2/csidriver/installation/test/powermax.md b/content/v2/csidriver/installation/test/powermax.md index 01b87aca59..f1350305ce 100644 --- a/content/v2/csidriver/installation/test/powermax.md +++ b/content/v2/csidriver/installation/test/powermax.md @@ -40,6 +40,7 @@ This script does the following: - After that, it uses that PVC as the data source to create a new PVC and mounts it on the same container. It checks if the file that existed in the source PVC also exists in the new PVC, calculates its checksum, and compares it to the checksum previously calculated. - Finally, it cleans up all the resources that are created as part of the test. +> This is not supported for replicated volumes. #### Snapshot test @@ -71,6 +72,8 @@ Use this procedure to perform a volume expansion test. - After that, it calculates the checksum of the written data, expands the PVC, and then recalculates the checksum - Cleans up all the resources that were created as part of the test +>Note: This is not applicable for replicated volumes. + ### Setting Application Prefix Application prefix is the name of the application that can be used to group the PowerMax volumes. We can use it while naming storage group. 
To set the application prefix for PowerMax, please refer to the sample storage class https://github.com/dell/csi-powermax/blob/main/samples/storageclass/powermax.yaml. diff --git a/content/v2/csidriver/release/powermax.md b/content/v2/csidriver/release/powermax.md index 52c67cf950..5739dd04ee 100644 --- a/content/v2/csidriver/release/powermax.md +++ b/content/v2/csidriver/release/powermax.md @@ -25,3 +25,4 @@ There are no fixed issues in this release. ### Note: - Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. +- Expansion of volumes and cloning of volumes are not supported for replicated volumes. diff --git a/content/v2/csidriver/upgradation/drivers/operator.md b/content/v2/csidriver/upgradation/drivers/operator.md index 0cfbc9355e..d3f9b22a5b 100644 --- a/content/v2/csidriver/upgradation/drivers/operator.md +++ b/content/v2/csidriver/upgradation/drivers/operator.md @@ -13,10 +13,9 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the ### Using Installation Script -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). +1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.7.0 https://github.com/dell/dell-csi-operator.git`. 2. cd dell-csi-operator -3. git checkout dell-csi-operator-'your-version' -4. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator. +3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator. >Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. ### Using OLM diff --git a/content/v2/deployment/csmoperator/drivers/powerscale.md b/content/v2/deployment/csmoperator/drivers/powerscale.md index 951ece9dd0..4471f1d1e6 100644 --- a/content/v2/deployment/csmoperator/drivers/powerscale.md +++ b/content/v2/deployment/csmoperator/drivers/powerscale.md @@ -18,7 +18,8 @@ Note that the deployment of the driver using the operator does not use any Helm User can query for all Dell CSI drivers using the following command: `kubectl get csm --all-namespaces` -### Install Driver + +### Prerequisite 1. Create namespace. Execute `kubectl create namespace test-isilon` to create the test-isilon namespace (if not already present). Note that the namespace can be any user-defined name, in this example, we assume that the namespace is 'test-isilon'. @@ -104,10 +105,14 @@ User can query for all Dell CSI drivers using the following command: ``` Execute command: ```kubectl create -f empty-secret.yaml``` -4. Create a CR (Custom Resource) for PowerScale using the sample files provided +### Install Driver + +1. Follow all the [prerequisites](#prerequisite) above + +2. Create a CR (Custom Resource) for PowerScale using the sample files provided [here](https://github.com/dell/csm-operator/tree/master/samples). This file can be modified to use custom parameters if needed. -5. Users should configure the parameters in CR. The following table lists the primary configurable parameters of the PowerScale driver and their default values: +3. Users should configure the parameters in CR. 
The following table lists the primary configurable parameters of the PowerScale driver and their default values: | Parameter | Description | Required | Default | | --------- | ----------- | -------- |-------- | @@ -128,11 +133,11 @@ User can query for all Dell CSI drivers using the following command: | X_CSI_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | Yes | 0 | | X_CSI_MODE | Driver starting mode | No | node | -6. Execute the following command to create PowerScale custom resource: ```kubectl create -f <input_sample_file.yaml>``` . This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file. -7. [Verify the CSI Driver installation](../../#verifying-the-driver-installation) +4. Execute the following command to create PowerScale custom resource: ```kubectl create -f <input_sample_file.yaml>``` . This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file. +5. [Verify the CSI Driver installation](../../#verifying-the-driver-installation) **Note** : 1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation. diff --git a/content/v2/deployment/csmoperator/modules/_index.md b/content/v2/deployment/csmoperator/modules/_index.md index 4b79544a51..4a76e7d868 100644 --- a/content/v2/deployment/csmoperator/modules/_index.md +++ b/content/v2/deployment/csmoperator/modules/_index.md @@ -3,4 +3,11 @@ title: "CSM Modules" linkTitle: "CSM Modules" description: Installation of Dell CSM Modules using Dell CSM Operator weight: 2 ---- \ No newline at end of file +--- + +The CSM Operator can optionally enable modules that are supported by the specific Dell CSI driver. By default, the modules are disabled but they can be enabled by setting any pre-requisite configuration options for the given module and setting the enabled flag to true in the custom resource. +The steps include: + +1. Deploy the Dell CSM Operator (if it is not already deployed). Please follow the instructions available [here](../../#installation). +2. Configure any pre-requisite for the desired module(s). See the specific module below for more information. +3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module. \ No newline at end of file diff --git a/content/v2/deployment/csmoperator/modules/authorization.md b/content/v2/deployment/csmoperator/modules/authorization.md index 3e9307bab8..4d1e2ca19b 100644 --- a/content/v2/deployment/csmoperator/modules/authorization.md +++ b/content/v2/deployment/csmoperator/modules/authorization.md @@ -2,19 +2,11 @@ title: Authorization linkTitle: "Authorization" description: > - Installing Authorization via Dell CSM Operator + Pre-requisite for Installing Authorization via Dell CSM Operator --- -## Installing Authorization via Dell CSM Operator +The CSM Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. Please note, Dell CSM operator currently ONLY supports deploying CSM Authorization sidecar/container. -The Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. +## Pre-requisite -To deploy the Dell CSM Operator, follow the instructions available [here](../../#installation). 
- -There are [sample manifests](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerscale.yaml) provided which can be edited to do an easy installation of the driver along with the module. - -### Install Authorization - -1. Create the required Secrets as documented in the [Helm chart procedure](../../../../authorization/deployment/#configuring-a-dell-csi-driver). - -2. Follow the instructions available [here](../../drivers/powerscale/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable Authorization. \ No newline at end of file +Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../authorization/deployment/_index.md/#configuring-a-dell-csi-driver). \ No newline at end of file diff --git a/content/v2/deployment/csmoperator/modules/replication.md b/content/v2/deployment/csmoperator/modules/replication.md new file mode 100644 index 0000000000..cba958854a --- /dev/null +++ b/content/v2/deployment/csmoperator/modules/replication.md @@ -0,0 +1,27 @@ +--- +title: Replication +linkTitle: "Replication" +description: > + Pre-requisite for Installing Replication via Dell CSM Operator +--- + +The CSM Replication module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. The Dell CSM Operator will deploy the CSM Replication sidecar and the complementary CSM Replication controller manager. + +## Prerequisite + +To use Replication, you need at least two clusters: + +- a source cluster which is the main cluster +- one or more target clusters which will serve as disaster recovery clusters for the main cluster + +To configure all the clusters, follow the steps below: + +1. On your main cluster, follow the instructions available in CSM Replication for [Installation using repctl](../../../replication/deployment/install-repctl.md). NOTE: In step 4 of the link above, you MUST use the command below to automatically package all clusters' `.kube` config as a secret: + +```shell + ./repctl cluster inject +``` + +CSM Operator needs these admin configs instead of the service accounts' configs to be able to properly manage the target clusters. The default service account that will be used is the CSM Operator service account. + +2. On each of the target clusters, configure the prerequisites for deploying the driver via Dell CSM Operator.
For example, PowerScale has the following [prerequisites for deploying PowerScale via Dell CSM Operator](../drivers/powerscale.md/#prerequisite) \ No newline at end of file diff --git a/content/v2/observability/deployment/_index.md b/content/v2/observability/deployment/_index.md index 582e8d90c0..9a5d6f2566 100644 --- a/content/v2/observability/deployment/_index.md +++ b/content/v2/observability/deployment/_index.md @@ -30,7 +30,7 @@ The Prometheus service should be running on the same Kubernetes cluster as the C | Supported Version | Image | Helm Chart | | ----------------- | ----------------------- | ------------------------------------------------------------ | -| 2.22.0 | prom/prometheus:v2.22.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) | +| 2.23.0 | prom/prometheus:v2.23.0 | [Prometheus Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus) | **Note**: It is the user's responsibility to provide persistent storage for Prometheus if they want to preserve historical data. @@ -65,13 +65,13 @@ Here is a sample minimal configuration for Prometheus. Please note that the conf type: NodePort servicePort: 9090 extraScrapeConfigs: | - - job_name: 'karavi-metrics-powerflex' - scrape_interval: 5s - scheme: https - static_configs: - - targets: ['otel-collector:8443'] - tls_config: - insecure_skip_verify: true + - job_name: 'karavi-metrics-[CSI-DRIVER]' + scrape_interval: 5s + scheme: https + static_configs: + - targets: ['otel-collector:8443'] + tls_config: + insecure_skip_verify: true ``` 2. If using Rancher, create a ServiceMonitor. @@ -227,7 +227,7 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste - name: Prometheus type: prometheus access: proxy - url: 'http://prometheus:9090' + url: 'http://prometheus-server:9090' isDefault: null version: 1 editable: true diff --git a/content/v2/replication/_index.md b/content/v2/replication/_index.md index fe7de3d6dd..cae6e7d45d 100644 --- a/content/v2/replication/_index.md +++ b/content/v2/replication/_index.md @@ -16,32 +16,32 @@ applications in case of both planned and unplanned migration. 
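Stepping back to the Prometheus hunk above: the updated `extraScrapeConfigs` sample uses a `[CSI-DRIVER]` placeholder. Purely as a hedged illustration, here is how that value might look once the placeholder is filled in for a PowerStore deployment; the job name is the only substitution, and the `otel-collector:8443` target and TLS settings are carried over unchanged from the sample configuration.

```yaml
# Illustrative only: the sample extraScrapeConfigs value above with the
# [CSI-DRIVER] placeholder filled in for PowerStore metrics.
extraScrapeConfigs: |
  - job_name: 'karavi-metrics-powerstore'
    scrape_interval: 5s
    scheme: https
    static_configs:
      - targets: ['otel-collector:8443']
    tls_config:
      insecure_skip_verify: true
```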
CSM for Replication provides the following capabilities: {{}} -| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax | -| - | :-: | :-: | :-: | :-: | :-: | -| Replicate data using native storage array based replication | yes | no | yes | no | yes | -| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | no | yes | no | yes | -| Create `DellCSIReplicationGroup` objects in the cluster | yes | no | yes | no | yes | -| Failover & Reprotect applications using the replicated volumes | yes | no | yes | no | yes | -| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | no | yes | no | yes | +| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity | +| ----------------------------------------------------------------------------------- | :------: | :--------: | :--------: | :-------: | :---: | +| Replicate data using native storage array based replication | yes | yes | yes | no | no | +| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no | +| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no | +| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no | +| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no | {{
}} ## Supported Operating Systems/Container Orchestrator Platforms {{}} -| COP/OS | PowerMax | PowerStore | PowerScale | -|-|-|-|-| -| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23| -| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | -| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x | -| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | -| Ubuntu | 20.04 | 20.04 | 20.04 | -| SLES | 15SP2 | 15SP2 | 15SP2 | +| COP/OS | PowerMax | PowerStore | PowerScale | +|---------------|------------------|------------------|------------| +| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | +| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | +| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x | +| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | +| Ubuntu | 20.04 | 20.04 | 20.04 | +| SLES | 15SP2 | 15SP2 | 15SP2 | {{
}} ## Supported Storage Platforms {{}} -| | PowerMax | PowerStore | PowerScale | +| | PowerMax | PowerStore | PowerScale | |---------------|:-------------------:|:----------------:|:----------------:| | Storage Array | 5978.479.479, 5978.711.711, Unisphere 9.2 | 1.0.x, 2.0.x, 2.1.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | {{
}} @@ -50,11 +50,11 @@ CSM for Replication provides the following capabilities: CSM for Replication supports the following CSI drivers and versions. {{}} -| Storage Array | CSI Driver | Supported Versions | -| ------------- | ---------- | ------------------ | -| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | -| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 | +| Storage Array | CSI Driver | Supported Versions | +| ------------------------------ | -------------------------------------------------------- | ------------------ | +| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 | {{
}} ## Details @@ -80,27 +80,23 @@ the objects still exist in pairs. CSM for Replication provides the following capabilities: {{}} -| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity | -| ---------| -------- | -------- | -------- | -------- | -------- | -| Asynchronous replication of PVs accross K8s clusters | yes | yes | yes | no | no | -| Synchronous replication of PVs accross K8s clusters | yes | no | no | no | no | -| Single cluster (stretched) mode replication | yes | yes | yes | no | no | -| Replication actions (failover, reprotect) | yes | yes | yes | no | no | +| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity | +| ----------------------------------------------------------------| -------- | ---------- | ---------- | --------- | ----- | +| Asynchronous replication of PVs across K8s clusters or within a single K8s cluster | yes | yes (block) | yes | no | no | +| Synchronous replication of PVs across K8s clusters or within a single K8s cluster | yes | no | no | no | no | +| Metro replication in a single (stretched) cluster | yes | no | no | no | no | +| Replication actions (failover, reprotect) | yes | yes | yes | no | no | {{
}} ### Supported Platforms The following matrix provides a list of all supported versions for each Dell Storage product. -| Platforms | PowerMax | PowerStore | PowerScale | -| -------- | --------- | ---------- | ---------- | +| Platforms | PowerMax | PowerStore | PowerScale | +| ---------- | ----------------- | ---------------- | ---------------- | | Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | -| CSI Driver | 2.x | 2.x | 2.2+ | - -| Platforms | PowerMax | PowerStore | PowerScale | -| -------- | --------- | ---------- | ---------- | -| RedHat Openshift |4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | -| CSI Driver | 2.2+ | 2.x | 2.2+ | +| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | +| CSI Driver | 2.x | 2.x | 2.2+ | For compatibility with storage arrays please refer to the corresponding [CSI drivers](../csidriver/#features-and-capabilities) diff --git a/content/v3/FAQ/_index.md b/content/v3/FAQ/_index.md index b7584f0534..39ffd7d493 100644 --- a/content/v3/FAQ/_index.md +++ b/content/v3/FAQ/_index.md @@ -1,21 +1,19 @@ --- title: "CSM FAQ" linktitle: "FAQ" -description: Frequently asked questions of Dell EMC Container Storage Modules +description: Frequently asked questions of Dell Technologies (Dell) Container Storage Modules weight: 2 --- - [What are Dell Container Storage Modules (CSM)? How different is it from a CSI driver?](#what-are-dell-container-storage-modules-csm-how-different-is-it-from-a-csi-driver) - [Where do I start with Dell Container Storage Modules (CSM)?](#where-do-i-start-with-dell-container-storage-modules-csm) -- [Is the Container Storage Module XYZ available for my array?](#is-the-container-storage-module-xyz-available-for-my-array) - [What are the prerequisites for deploying Container Storage Modules?](#what-are-the-prerequisites-for-deploying-container-storage-modules) -- [How do I uninstall or disable a Container Storage Module?](#how-do-i-uninstall-or-a-disable-a-module) +- [How do I uninstall or disable a module?](#how-do-i-uninstall-or-disable-a-module) - [How do I troubleshoot Container Storage Modules?](#how-do-i-troubleshoot-container-storage-modules) - [Can I use the CSM functionality like Prometheus collection or Authorization quotas for my non-Kubernetes storage clients?](#can-i-use-the-csm-functionality-like-prometheus-collection-or-authorization-quotas-for-my-non-kubernetes-storage-clients) - [Should I install the module in the same namespace as the driver or another?](#should-i-install-the-module-in-the-same-namespace-as-the-driver-or-another) - [Which Kubernetes distributions are supported?](#which-kubernetes-distributions-are-supported) - [How do I get a list of Container Storage Modules deployed in my cluster with their versions?](#how-do-i-get-a-list-of-container-storage-modules-deployed-in-my-cluster-with-their-versions) -- [Does the CSM Installer provide full Container Storage Modules functionality for all products?](#does-the-csm-installer-provide-full-container-storage-modules-functionality-for-all-products) - [Do all Container Storage Modules need to be the same version, or can I mix and match?](#do-all-container-storage-modules-need-to-be-the-same-version-or-can-i-mix-and-match) - [Can I run Container Storage Modules in a production environment?](#can-i-run-container-storage-modules-in-a-production-environment) - [Is Dell Container Storage Modules (CSM) supported by Dell Technologies?](#is-dell-container-storage-modules-csm-supported-by-dell-technologies) @@ -30,53 +28,42 @@ The main goal with CSM modules is to expose storage
array enterprise features di ### Where do I start with Dell Container Storage Modules (CSM)? The umbrella repository for every Dell Container Storage Module is: [https://github.com/dell/csm](https://github.com/dell/csm). -### Is the Container Storage Module XYZ available for my array? -Please see module and the respectice CSI driver version available for each array: - -| CSM Module | CSI PowerFlex v2.1 | CSI PowerScale v2.1 | CSI PowerStore v2.1 | CSI PowerMax v2.1 | CSI Unity XT v2.1 | -| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- | -| Authorization v1.1| ✔️ | ✔️ | ❌ | ✔️ | ❌ | -| Observability v1.0| ✔️ | ❌ | ✔️ | ❌ | ❌ | -| Replication v1.1| ❌ | ❌ | ✔️ | ✔️ | ❌ | -| Resilency v1.0| ✔️ | ❌ | ❌ | ❌ | ✔️ | -| CSM Installer v1.0| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | - ### What are the prerequisites for deploying Container Storage Modules? Prerequisites can be found on the respective module deployment pages: -- [Dell EMC Container Storage Module for Observability Deployment](../observability/deployment/#prerequisites) -- [Dell EMC Container Storage Module for Authorization Deployment](../authorization/deployment/#prerequisites) -- [Dell EMC Container Storage Module for Resiliency Deployment](../resiliency/deployment/) -- [Dell EMC Container Storage Module for Replication Deployment](../replication/deployment/installation/#before-you-begin) +- [Dell Container Storage Module for Observability Deployment](../observability/deployment/#prerequisites) +- [Dell Container Storage Module for Authorization Deployment](../authorization/deployment/#prerequisites) +- [Dell Container Storage Module for Resiliency Deployment](../resiliency/deployment/) +- [Dell Container Storage Module for Replication Deployment](../replication/deployment/installation/#before-you-begin) -Prerequisites for deploying the Dell EMC CSI drivers can be found here: -- [Dell EMC CSI Drivers Deployment](../csidriver/installation/) +Prerequisites for deploying the Dell CSI drivers can be found here: +- [Dell CSI Drivers Deployment](../csidriver/installation/) -### How do I uninstall or a disable a module? -- [Dell EMC Container Storage Module for Authorization](../authorization/uninstallation/) -- [Dell EMC Container Storage Module for Observability](../observability/uninstall/) -- [Dell EMC Container Storage Module for Resiliency](../resiliency/uninstallation/) +### How do I uninstall or disable a module? +- [Dell Container Storage Module for Authorization](../authorization/uninstallation/) +- [Dell Container Storage Module for Observability](../observability/uninstall/) +- [Dell Container Storage Module for Resiliency](../resiliency/uninstallation/) ### How do I troubleshoot Container Storage Modules? 
-- [Dell EMC CSI Drivers](../csidriver/troubleshooting/) -- [Dell EMC Container Storage Module for Authorization](../authorization/troubleshooting/) -- [Dell EMC Container Storage Module for Observability](../observability/troubleshooting/) -- [Dell EMC Container Storage Module for Replication](../replication/troubleshooting/) -- [Dell EMC Container Storage Module for Resiliency](../resiliency/troubleshooting/) +- [Dell CSI Drivers](../csidriver/troubleshooting/) +- [Dell Container Storage Module for Authorization](../authorization/troubleshooting/) +- [Dell Container Storage Module for Observability](../observability/troubleshooting/) +- [Dell Container Storage Module for Replication](../replication/troubleshooting/) +- [Dell Container Storage Module for Resiliency](../resiliency/troubleshooting/) ### Can I use the CSM functionality like Prometheus collection or Authorization quotas for my non-Kubernetes storage clients? -No, all the modules have been designed to work inside Kubernetes with Dell EMC CSI drivers. +No, all the modules have been designed to work inside Kubernetes with Dell CSI drivers. ### Should I install the module in the same namespace as the driver or another? -It is recommended to install CSM for Observability in a namespace separate from the Dell EMC CSI drivers because it works across multiple drivers. All other modules either run as standalone or are injected into the Dell EMC CSI driver as a sidecar. +It is recommended to install CSM for Observability in a namespace separate from the Dell CSI drivers because it works across multiple drivers. All other modules either run as standalone or with the Dell CSI driver as a sidecar. ### Which Kubernetes distributions are supported? The supported Kubernetes distributions for Container Storage Modules are documented: -- [Dell EMC Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms) -The supported distros for the Dell EMC CSI Drivers are located [here](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms). +The supported distros for the Dell CSI Drivers are located [here](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms). ### How do I get a list of Container Storage Modules deployed in my cluster with their versions? The easiest way to find the module version is to check the image tag for the module. 
For all the namespaces you can execute the following: @@ -88,18 +75,13 @@ Or if you know the namespace: kubectl get deployment,daemonset -o wide -n {{namespace}} ``` -### Does the CSM Installer provide full Container Storage Modules functionality for all products? -The CSM Installer supports the installation of all the Container Storage Modules and Dell EMC CSI drivers. - ### Do all Container Storage Modules need to be the same version, or can I mix and match? It is advised to comply with the support matrices (links below) and not deviate from it with mixed versions. -- [Dell EMC Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms) -- [Dell EMC CSI Drivers](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms). - -The CSM installer module will help to stay aligned with compatible versions during the first install and future upgrades. +- [Dell Container Storage Module for Authorization](../authorization/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Observability](../observability/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Replication](../replication/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell Container Storage Module for Resiliency](../resiliency/#supported-operating-systemscontainer-orchestrator-platforms) +- [Dell CSI Drivers](../csidriver/#supported-operating-systemscontainer-orchestrator-platforms). ### Can I run Container Storage Modules in a production environment? As of CSM 1.0, the Container Storage Modules are GA and ready for production systems. @@ -115,4 +97,4 @@ Yes! All Container Storage Modules are released as open-source projects under Apache-2.0 License. You are free to contribute directly following the [contribution guidelines](https://github.com/dell/csm/blob/main/docs/CONTRIBUTING.md), fork the projects, modify them, and of course share feedback or open tickets ;-) ### What is coming next? -This is just the beginning of the journey for Dell Container Storage Modules, and there is a full roadmap with more to come, which you can check under the [GithHub Milestones](https://github.com/dell/csm/milestones) page. +This is just the beginning of the journey for Dell Container Storage Modules, and there is a full roadmap with more to come, which you can check under the [GitHub Milestones](https://github.com/dell/csm/milestones) page. diff --git a/content/v3/_index.md b/content/v3/_index.md index 18b7ddfaaa..68f876afee 100644 --- a/content/v3/_index.md +++ b/content/v3/_index.md @@ -7,7 +7,7 @@ linkTitle: "Documentation" This document version is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the [latest version](/csm-docs/) {{% /pageinfo %}} -The Dell Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. 
It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization and, resiliency. +The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization, and resiliency. CSM Hex Diagram @@ -15,16 +15,25 @@ CSM is made up of multiple components including modules (enterprise capabilities CSM Diagram -## CSM Supported Modules and Dell EMC CSI Drivers +## CSM Supported Modules and Dell CSI Drivers -| Modules/Drivers | CSM 1.1 | [CSM 1.0](../v1/) | [Previous](../v2/) | [Older](../v3) | +| Modules/Drivers | CSM 1.2 | [CSM 1.1](../v1/) | [CSM 1.0.1](../v1/) | [CSM 1.0](../v2/) | | - | :-: | :-: | :-: | :-: | -| Authorization | 1.1 | 1.0 | - | - | -| Observability | 1.0 | 1.0 | - | - | -| Replication | 1.1 | 1.0 | - | - | -| Resiliency | 1.0 | 1.0 | - | - | -| CSI Driver for PowerScale | v2.1 | v2.0 | v1.6 | v1.5 | -| CSI Driver for Unity | v2.1 | v2.0 | v1.6 | v1.5 | -| CSI Driver for PowerStore | v2.1 | v2.0 | v1.4 | v1.3 | -| CSI Driver for PowerFlex | v2.1 | v2.0 | v1.5 | v1.4 | -| CSI Driver for PowerMax | v2.1 | v2.0 | v1.7 | v1.6 | +| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.1 | 1.0 | 1.0 | +| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1 | 1.0.1 | 1.0.1 | 1.0 | +| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.1 | 1.0 | 1.0 | +| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.0.1 | 1.0.1 | 1.0 | +| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.1 | v2.0 | v2.0 | +| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.1 | v2.0 | v2.0 | +| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.1 | v2.0 | v2.0 | +| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.1 | v2.0 | v2.0 | +| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.1 | v2.0 | v2.0 | + +## CSM Modules Support Matrix for Dell CSI Drivers + +| CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 | +| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- | +| Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ | +| Observability v1.1| ✔️ | ❌ | ✔️ | ❌ | ❌ | +| Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ | +| Resiliency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️ | \ No newline at end of file diff --git a/content/v3/authorization/_index.md b/content/v3/authorization/_index.md index 329e6065a1..0310e936d6 100644 --- a/content/v3/authorization/_index.md +++ b/content/v3/authorization/_index.md @@ -3,18 +3,18 @@ title: "Authorization" linkTitle: "Authorization" weight: 4 Description: > - Dell EMC Container Storage Modules (CSM) for Authorization + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization --- -[Container Storage Modules](https://github.com/dell/csm) (CSM) for Authorization is part of the open-source suite of
Kubernetes storage enablers for Dell EMC products. +[Container Storage Modules](https://github.com/dell/csm) (CSM) for Authorization is part of the open-source suite of Kubernetes storage enablers for Dell products. -CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell EMC CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules. +CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules. Storage administrators of compatible storage platforms will be able to apply quota and RBAC rules that instantly and automatically restrict cluster tenants' usage of storage resources. Users of storage through CSM for Authorization do not need to have storage admin root credentials to access the storage system. Kubernetes administrators will have an interface to create, delete, and manage roles/groups to which storage rules may be applied. Administrators and/or users may then generate authentication tokens that may be used by tenants to use storage with proper access policies being automatically enforced. -The following diagram shows a high-level overview of CSM for Authorization with a `tenant-app` that is using a CSI driver to perform storage operations through the CSM for Authorization `proxy-server` to access the a Dell EMC storage system. All requests from the CSI driver will contain the token for the given tenant that was granted by the Storage Administrator. +The following diagram shows a high-level overview of CSM for Authorization with a `tenant-app` that is using a CSI driver to perform storage operations through the CSM for Authorization `proxy-server` to access a Dell storage system. All requests from the CSI driver will contain the token for the given tenant that was granted by the Storage Administrator. ![CSM for Authorization](./karavi-authorization-example.png "CSM for Authorization") @@ -27,13 +27,13 @@ The following diagram shows a high-level overview of CSM for Authorization with | Ability to shield storage credentials from Kubernetes administrators ensuring credentials are only handled by storage admins | Yes | Yes | Yes | No | No | {{}} -__NOTE:__ PowerScale OneFS implements its own form of Role-Based Access Control (RBAC). CSM for Authorization does not enforce any role-based restrictions for PowerScale. To configure RBAC for PowerScale, refer to the PowerScale OneFS [documentation](https://www.dell.com/support/home/en-us/product-support/product/isilon-onefs/docs). +**NOTE:** PowerScale OneFS implements its own form of Role-Based Access Control (RBAC). CSM for Authorization does not enforce any role-based restrictions for PowerScale. To configure RBAC for PowerScale, refer to the PowerScale OneFS [documentation](https://www.dell.com/support/home/en-us/product-support/product/isilon-onefs/docs). ## Supported Operating Systems/Container Orchestrator Platforms {{}} | COP/OS | Supported Versions | |-|-| -| Kubernetes | 1.20, 1.21, 1.22 | +| Kubernetes | 1.21, 1.22, 1.23 | | Red Hat OpenShift | 4.8, 4.9| | RHEL | 7.x, 8.x | | CentOS | 7.8, 7.9 | @@ -44,7 +44,7 @@ __NOTE:__ PowerScale OneFS implements its own form of Role-Based Access Control {{
}} | | PowerMax | PowerFlex | PowerScale | |---------------|:----------------:|:-------------------:|:----------------:| -| Storage Array |5978.479.479, 5978.669.669, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2 | +| Storage Array |5978.479.479, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | {{
}} ## Supported CSI Drivers @@ -53,12 +53,12 @@ CSM for Authorization supports the following CSI drivers and versions. {{}} | Storage Array | CSI Driver | Supported Versions | | ------------- | ---------- | ------------------ | -| CSI Driver for Dell EMC PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0,v2.1 | -| CSI Driver for Dell EMC PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0,v2.1 | -| CSI Driver for Dell EMC PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0,v2.1 | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0, v2.1, v2.2 | {{
}} -__Note:__ If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod. +**NOTE:** If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod. To resolve this, please refer to our [troubleshooting guide](./troubleshooting) on the topic. ## Authorization Components Support Matrix @@ -68,6 +68,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t | Authorization Sidecar Image Tag | Authorization Proxy Server Version | | ------------------------------- | ---------------------------------- | | dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 | +| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 | {{}} ## Roles and Responsibilities @@ -99,4 +100,4 @@ Tenants of CSM for Authorization can use the token provided by the Storage Admin 4) Tenant Admin inputs the Token into their Kubernetes cluster as a Secret. 5) Tenant Admin updates CSI driver with CSM Authorization sidecar module. -![CSM for Authorization Workflow](./design2.png "CSM for Authorization Workflow") \ No newline at end of file +![CSM for Authorization Workflow](./design2.png "CSM for Authorization Workflow") diff --git a/content/v3/authorization/cli.md b/content/v3/authorization/cli.md index eedaf0957d..f1ef1bb5aa 100644 --- a/content/v3/authorization/cli.md +++ b/content/v3/authorization/cli.md @@ -3,7 +3,7 @@ title: CLI linktitle: CLI weight: 4 description: > - Dell EMC Container Storage Modules (CSM) for Authorization CLI + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization CLI --- karavictl is a command-line interface (CLI) used to interact with and manage your Container Storage Modules (CSM) Authorization deployment. @@ -15,7 +15,6 @@ If you feel that something is unclear or missing in this document, please open u | - | - | | [karavictl](#karavictl) | karavictl is used to interact with CSM Authorization Server | | [karavictl cluster-info](#karavictl-cluster-info) | Display the state of resources within the cluster | -| [karavictl inject](#karavictl-inject) | Inject the sidecar proxy into a CSI driver pod | | [karavictl generate](#karavictl-generate) | Generate resources for use with CSM | | [karavictl generate token](#karavictl-generate-token) | Generate tokens | | [karavictl role](#karavictl-role) | Manage role | @@ -48,7 +47,7 @@ karavictl is used to interact with CSM Authorization Server ##### Synopsis -karavictl provides security, RBAC, and quota limits for accessing Dell EMC +karavictl provides security, RBAC, and quota limits for accessing Dell storage products from Kubernetes clusters ##### Options @@ -112,60 +111,6 @@ redis-commander 1/1 1 1 59m -### karavictl inject - -Inject the sidecar proxy into a CSI driver pod - -##### Synopsis - -Injects the sidecar proxy into a CSI driver pod. - -You can inject resources coming from stdin. 
- -``` -karavictl inject [flags] -``` - -##### Options - -``` - -h, --help help for inject - --image-addr string Help message for image-addr - --proxy-host string Help message for proxy-host -``` - -##### Options inherited from parent commands - -``` - --config string config file (default is $HOME/.karavictl.yaml) -``` - -##### Examples: - -Inject into an existing vxflexos CSI driver -``` -kubectl get secrets,deployments,daemonsets -n vxflexos -o yaml \ - | karavictl inject --image-addr [IMAGE_REPO]:5000/sidecar-proxy:latest --proxy-host [PROXY_HOST_IP] \ - | kubectl apply -f - -``` - -##### Output - -``` -$ kubectl get secrets,deployments,daemonsets -n vxflexos -o yaml \ -| karavictl inject --image-addr [IMAGE_REPO]:5000/sidecar-proxy:latest --proxy-host [PROXY_HOST_IP] \ -| kubectl apply -f - - -secret/karavi-authorization-config created -deployment.apps/vxflexos-controller configured -daemonset.apps/vxflexos-node configured -``` - - ---- - - - ### karavictl generate Generate resources for use with CSM diff --git a/content/v3/authorization/deployment/_index.md b/content/v3/authorization/deployment/_index.md index 8a4ab73dd2..ca15cb03da 100644 --- a/content/v3/authorization/deployment/_index.md +++ b/content/v3/authorization/deployment/_index.md @@ -3,12 +3,12 @@ title: Deployment linktitle: Deployment weight: 2 description: > - Dell EMC Container Storage Modules (CSM) for Authorization deployment + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization deployment --- This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts: - Deploying the CSM for Authorization proxy server, to be controlled by storage administrators -- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell EMC CSI drivers with CSM for Authorization +- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization ## Prerequisites @@ -27,32 +27,31 @@ The CSM for Authorization proxy server is installed using a single binary instal The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section. -The single binary installer can also be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer: +Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer: ``` make dist build-installer rpm ``` -The `build-installer` step creates a binary at `bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `deploy/rpm/x86_64/`. +The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`. This allows CSM for Authorization to be installed in network-restricted environments. A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`. ### Installing the RPM -1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. 
The json file should be created in the location `$HOME/.karavi/config.json` having the following contents: +1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` with the following contents: ```json { "web": { - "sidecarproxyaddr": "docker_registry/sidecar-proxy:latest", "jwtsigningsecret": "secret" }, "proxy": { "host": ":8080" }, "zipkin": { - "collectoruri": "http://DNS_host_name:9411/api/v2/spans", + "collectoruri": "http://DNS-hostname:9411/api/v2/spans", "probability": 1 }, "certificate": { @@ -60,30 +59,36 @@ A Storage Administrator can execute the installer or rpm package as a root user "crtFile": "path_to_host_cert_file", "rootCertificate": "path_to_root_CA_file" }, - "hostName": "DNS_host_name" + "hostname": "DNS-hostname" } ``` - In the above template, `DNS_host_name` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running the below command on the system: + If a secure deployment is not required, an insecure deployment is possible. Please note that self-signed certificates will be created for you using cert-manager to allow TLS encryption for communication on the CSM for Authorization proxy server. However, this is not recommended for production environments. For an insecure deployment, the json file in the location `$HOME/.karavi/config.json` only requires the following contents: - ``` - nslookup + ```json + { + "hostname": "DNS-hostname" + } ``` -2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS_host_name` is also required. All traffic from `grpc.DNS_host_name` needs to be routed to `DNS_host_name` address, this can be configured by adding a new DNS entry for `grpc.DNS_host_name` or providing a temporary path in the `/etc/hosts` file. +>__Note__: +> - `DNS-hostname` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running `nslookup ` +> - There are a number of ways to create certificates. In a production environment, certificates are usually created and managed by an IT administrator. Otherwise, certificates can be created using OpenSSL. ->__Note__: The certificate provided in `crtFile` should be valid for both the `DNS_host_name` and the `grpc.DNS_host_name` address. +2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS-hostname` is also required. All traffic from `grpc.DNS-hostname` needs to be routed to the `DNS-hostname` address; this can be configured by adding a new DNS entry for `grpc.DNS-hostname` or providing a temporary path in the system's `/etc/hosts` file. +>__Note__: The certificate provided in `crtFile` should be valid for both the `DNS-hostname` and the `grpc.DNS-hostname` address.
- ``` - CN = example.com - subjectAltName = @alt_names - [alt_names] - DNS.1 = grpc.example.com + For example, create the certificate config file with alternate names (to include DNS-hostname and grpc.DNS-hostname) and then create the .crt file: - openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out example.com.crt -days 365 -sha256 - ``` + ``` + CN = DNS-hostname + subjectAltName = @alt_names + [alt_names] + DNS.1 = grpc.DNS-hostname.com + + $ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.com.crt -days 365 -sha256 + ``` 3. To install the rpm package on the system, run the below command: @@ -102,6 +107,7 @@ The storage administrator must first configure the proxy server with the followi - Bind roles to tenants Run the following commands on the Authorization proxy server: +>__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`. ```console # Specify any desired name @@ -168,6 +174,10 @@ Run the following commands on the Authorization proxy server: After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin. +>__Note__: +> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`. +> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin. + ``` echo === Generating token === karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml @@ -175,12 +185,10 @@ After creating the role bindings, the next logical step is to generate the acces echo === Copy token to Driver Host === sshpass -p $DriverHostPassword scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml ``` - ->__Note__: The sample above copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin. ### Copy the karavictl Binary to the Kubernetes Master Node -The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node for Kubernetes tenant admins so the Kubernetes tenant admins can configure the Dell EMC CSI driver with CSM for Authorization. +The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node for Kubernetes tenant admins so the Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization. ``` sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl @@ -188,11 +196,11 @@ sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl >__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin. -## Configuring a Dell EMC CSI Driver with CSM for Authorization +## Configuring a Dell CSI Driver with CSM for Authorization The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin. 
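Before the per-driver configuration steps below, here is a minimal hedged sketch of how the tenant admin might apply the token that the storage admin copied to `/tmp/token.yaml` in the scp example above. The `vxflexos` namespace is an assumption for illustration only; use the namespace where your CSI driver is actually deployed.

```shell
# A hedged sketch: apply the token Secret that was copied to /tmp/token.yaml
# above into the CSI driver's namespace ("vxflexos" is an assumed example).
kubectl apply -f /tmp/token.yaml -n vxflexos
```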
-### Configuring a Dell EMC CSI Driver +### Configuring a Dell CSI Driver Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar: @@ -225,8 +233,7 @@ Create the karavi-authorization-config secret using the following command: >__Note__: > - Create the driver secret as you would normally except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password > - For PowerScale, the *systemID* will be the *clusterName* of the array. -> - The *isilon-creds* secret has a *mountEndpoint* parameter which should not be updated by the user. This parameter is updated and used when the driver has been injected with [CSM-Authorization](https://github.com/dell/karavi-authorization). - +> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1. 3. Create the proxy-server-root-certificate secret. If running in *insecure* mode, create the secret with empty data: @@ -270,7 +277,9 @@ Please refer to step 5 in the [installation steps for PowerScale](../../csidrive 1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json ->__Note__: In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml. +*Notes:* +> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml. +> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1. 2. Enable CSM for Authorization and provide *proxyHost* address @@ -294,7 +303,6 @@ CSM for Authorization has a subset of configuration parameters that can be updat | certificate.crtFile | String | "" |Path to the host certificate file | | certificate.keyFile | String | "" |Path to the host private key file | | certificate.rootCertificate | String | "" |Path to the root CA file | -| web.sidecarproxyaddr | String |"127.0.0.1:5000/sidecar-proxy:latest" |Docker registry address of the CSM for Authorization sidecar-proxy | | web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens | Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for the Authorization Server. The secret can be queried using k3s and kubectl like so: @@ -315,7 +323,7 @@ Copy the new, encoded data and edit the `karavi-config-secret` with the new data Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret. ->__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so: +>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. 
The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`: `karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' | kubectl -n $namespace apply -f -` diff --git a/content/v3/authorization/design.md b/content/v3/authorization/design.md index 8d9cd34138..564ac3c4e0 100644 --- a/content/v3/authorization/design.md +++ b/content/v3/authorization/design.md @@ -3,7 +3,7 @@ title: Design linktitle: Design weight: 1 description: > - Dell EMC Container Storage Modules (CSM) for Authorization design + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization design --- Container Storage Modules (CSM) for Authorization is designed as a service mesh solution and consists of many internal components that work together in concert to achieve its overall functionality. @@ -56,7 +56,7 @@ The mechanism for managing this storage would utilize a CSI Driver. ### CSI Driver -A CSI Driver supports the Container Service Interface (CSI) specification. Dell EMC provides customers with CSI Drivers for its various storage arrays. +A CSI Driver supports the Container Storage Interface (CSI) specification. Dell provides customers with CSI Drivers for its various storage arrays. CSM for Authorization intends to support a majority, if not all, of these drivers. A CSI Driver will typically be configured to communicate directly to its intended storage array and as such will be limited to using only the authentication @@ -66,7 +66,7 @@ methods supported by the Storage Array itself, e.g. Basic authentication over TL ### Sidecar Proxy -The CSM for Authorization Sidecar Proxy is a sidecar container that gets "injected" into the CSI Driver's Pod. It acts as a proxy and forwards all requests to a +The CSM for Authorization Sidecar Proxy is deployed as a sidecar in the CSI Driver's Pod. It acts as a proxy and forwards all requests to a CSM Authorization Server. The [CSI Driver section](#csi-driver) noted the limitation of a CSI Driver using Storage Array supported authentication methods only. By nature of being a proxy, the CSM for Authorization @@ -86,12 +86,9 @@ Inbound requests are expected to originate from the CSM for Authorization Sideca The [*karavictl*](../cli) CLI (Command Line Interface) application allows Storage Admins to manage and interact with a running CSM for Authorization Server. -Additionally, *karavictl* provides functionality for supporting the sidecar proxy injection mechanism mentioned above. Injection is discussed in more detail later -on in this document. ### Storage Array -A Storage Array is typically considered to be one of the various Dell EMC storage offerings, e.g. Dell EMC PowerFlex which is supported by CSM for Authorization +A Storage Array is typically considered to be one of the various Dell storage offerings, e.g. Dell PowerFlex which is supported by CSM for Authorization today. Support for more Storage Arrays will come in the future.
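Returning to the configuration-parameter update flow shown in the deployment hunk above, here is a hedged sketch of the full `karavi-config-secret` edit cycle. The `config.yaml` output filename comes from the text above; the layout of the key under `data` inside the secret is an assumption for illustration.

```shell
# A minimal sketch of the karavi-config-secret edit flow described above;
# the data key layout inside the secret is an assumption for illustration.
k3s kubectl -n karavi get secret karavi-config-secret -o yaml > config.yaml
echo -n 'my-new-signing-secret' | base64   # encode the replacement value
# paste the base64 output over the existing value under "data" in config.yaml,
# then apply the updated manifest so the server picks up the change:
k3s kubectl -n karavi apply -f config.yaml
```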
## How it Works diff --git a/content/v3/authorization/troubleshooting.md b/content/v3/authorization/troubleshooting.md index eef3c64a87..0a47cb4ec8 100644 --- a/content/v3/authorization/troubleshooting.md +++ b/content/v3/authorization/troubleshooting.md @@ -6,9 +6,6 @@ Description: > Troubleshooting guide --- -- [Running `karavictl inject` leaves the vxflexos-controller in a `Pending` state](#running-karavictl-inject-leaves-the-vxflexos-controller-in-a-pending-state) -- [Running `karavictl inject` leaves the powermax-controller in a `Pending` state](#running-karavictl-inject-leaves-the-powermax-controller-in-a-pending-state) -- [Running `karavictl inject` leaves the isilon-controller in a `Pending` state](#running-karavictl-inject-leaves-the-isilon-controller-in-a-pending-state) - [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error) --- @@ -26,153 +23,6 @@ For OPA related logs, run: $ k3s kubectl logs deploy/proxy-server -n karavi -c opa ``` -### Running "karavictl inject" leaves the vxflexos-controller in a "Pending" state -This situation may occur when the number of vxflexos-controller pods that are deployed is equal to the number of schedulable nodes. -``` -$ kubectl get pods -n vxflexos - -NAME READY STATUS RESTARTS AGE -vxflexos-controller-696cc5945f-4t94d 0/6 Pending 0 3m2s -vxflexos-controller-75cdcbc5db-k25zx 5/5 Running 0 3m41s -vxflexos-controller-75cdcbc5db-nkxqh 5/5 Running 0 3m42s -vxflexos-node-mjc74 3/3 Running 0 2m44s -vxflexos-node-zgswp 3/3 Running 0 2m44s -``` - -__Resolution__ - -To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using. - -1. Edit the deployment - ``` - $ kubectl edit -n vxflexos deploy/vxflexos-controller - ``` - -2. Find `replicas` under the `spec` section of the deployment manifest. -3. Reduce the number of `replicas` by 1 -4. Save the file -5. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n vxflexos - - NAME READY STATUS RESTARTS AGE - vxflexos-controller-696cc5945f-4t94d 6/6 Running 0 4m41s - vxflexos-node-mjc74 3/3 Running 0 3m44s - vxflexos-node-zgswp 3/3 Running 0 3m44s - ``` - -6. Edit the deployment again -7. Find `replicas` under the `spec` section of the deployment manifest. -8. Increase the number of `replicas` by 1 -9. Save the file -10. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n vxflexos - - NAME READY STATUS RESTARTS AGE - vxflexos-controller-696cc5945f-4t94d 6/6 Running 0 5m41s - vxflexos-controller-696cc5945f-6xxhb 6/6 Running 0 5m41s - vxflexos-node-mjc74 3/3 Running 0 4m44s - vxflexos-node-zgswp 3/3 Running 0 4m44s - ``` - -### Running "karavictl inject" leaves the powermax-controller in a "Pending" state -This situation may occur when the number of powermax-controller pods that are deployed is equal to the number of schedulable nodes. -``` -$ kubectl get pods -n powermax - -NAME READY STATUS RESTARTS AGE -powermax-controller-58d8779f5d-v7t56 0/6 Pending 0 25s -powermax-controller-78f749847-jqphx 5/5 Running 0 10m -powermax-controller-78f749847-w6vp5 5/5 Running 0 10m -powermax-node-gx5pk 3/3 Running 0 21s -powermax-node-k5gwc 3/3 Running 0 17s -``` - -__Resolution__ - -To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using. - -1. Edit the deployment - ``` - $ kubectl edit -n powermax deploy/powermax-controller - ``` - -2. 
Find `replicas` under the `spec` section of the deployment manifest. -3. Reduce the number of `replicas` by 1 -4. Save the file -5. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n powermax - NAME READY STATUS RESTARTS AGE - powermax-controller-58d8779f5d-cqx8d 6/6 Running 0 22s - powermax-node-gx5pk 3/3 Running 3 8m3s - powermax-node-k5gwc 3/3 Running 3 7m59s - ``` - -6. Edit the deployment again -7. Find `replicas` under the `spec` section of the deployment manifest. -8. Increase the number of `replicas` by 1 -9. Save the file -10. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n powermax - NAME READY STATUS RESTARTS AGE - powermax-controller-58d8779f5d-cqx8d 6/6 Running 0 22s - powermax-controller-58d8779f5d-v7t56 6/6 Running 22 8m7s - powermax-node-gx5pk 3/3 Running 3 8m3s - powermax-node-k5gwc 3/3 Running 3 7m59s - ``` - -### Running "karavictl inject" leaves the isilon-controller in a "Pending" state -This situation may occur when the number of Isilon controller pods that are deployed is equal to the number of schedulable nodes. -``` -$ kubectl get pods -n isilon - -NAME READY STATUS RESTARTS AGE -isilon-controller-58d8779f5d-v7t56 0/6 Pending 0 25s -isilon-controller-78f749847-jqphx 5/5 Running 0 10m -isilon-controller-78f749847-w6vp5 5/5 Running 0 10m -isilon-node-gx5pk 3/3 Running 0 21s -isilon-node-k5gwc 3/3 Running 0 17s -``` - -__Resolution__ - -To resolve this issue, we need to temporarily reduce the number of replicas that the driver deployment is using. - -1. Edit the deployment - ``` - $ kubectl edit -n deploy/isilon-controller - ``` - -2. Find `replicas` under the `spec` section of the deployment manifest. -3. Reduce the number of `replicas` by 1 -4. Save the file -5. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n isilon - - NAME READY STATUS RESTARTS AGE - isilon-controller-696cc5945f-4t94d 6/6 Running 0 4m41s - isilon-node-mjc74 3/3 Running 0 3m44s - isilon-node-zgswp 3/3 Running 0 3m44s - ``` - -6. Edit the deployment again -7. Find `replicas` under the `spec` section of the deployment manifest. -8. Increase the number of `replicas` by 1 -9. Save the file -10. Confirm that the updated controller pods have been deployed - ``` - $ kubectl get pods -n isilon - NAME READY STATUS RESTARTS AGE - isilon-controller-58d8779f5d-cqx8d 6/6 Running 0 22s - isilon-controller-58d8779f5d-v7t56 6/6 Running 22 8m7s - isilon-node-gx5pk 3/3 Running 3 8m3s - isilon-node-k5gwc 3/3 Running 3 7m59s - ``` - ### Running "karavictl tenant" commands result in an HTTP 504 error This situation may occur if there are Iptables or other firewall rules preventing communication with the provided ``: ``` diff --git a/content/v3/authorization/uninstallation.md b/content/v3/authorization/uninstallation.md index 4b8fad3b53..fcbcb37aa2 100644 --- a/content/v3/authorization/uninstallation.md +++ b/content/v3/authorization/uninstallation.md @@ -3,7 +3,7 @@ title: Uninstallation linktitle: Uninstallation weight: 2 description: > - Dell EMC Container Storage Modules (CSM) for Authorization Uninstallation + Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Uninstallation --- This section outlines the uninstallation steps for Container Storage Modules (CSM) for Authorization. 
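Stepping back to the HTTP 504 troubleshooting entry in the hunk above, here is a hedged sketch of firewall checks a storage admin might run on the proxy server host. The use of iptables/firewalld is an assumption; adapt the commands to whatever firewall tooling your environment uses.

```shell
# Hypothetical checks for the 504 case above; adjust for your firewall tooling.
iptables -L -n            # look for rules dropping traffic to the proxy address
firewall-cmd --list-all   # if firewalld is in use, inspect the active zone's rules
```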
diff --git a/content/v3/authorization/upgrade.md b/content/v3/authorization/upgrade.md index ba9a487365..4c31e3a926 100644 --- a/content/v3/authorization/upgrade.md +++ b/content/v3/authorization/upgrade.md @@ -3,12 +3,12 @@ title: Upgrade linktitle: Upgrade weight: 3 description: > - Upgrade Dell EMC Container Storage Modules (CSM) for Authorization + Upgrade Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization --- This section outlines the upgrade steps for Container Storage Modules (CSM) for Authorization. The upgrade of CSM for Authorization is handled in 2 parts: - Upgrading the CSM for Authorization proxy server -- Upgrading the Dell EMC CSI drivers with CSM for Authorization enabled +- Upgrading the Dell CSI drivers with CSM for Authorization enabled ### Upgrading CSM for Authorization proxy server @@ -29,7 +29,7 @@ k3s kubectl version >__Note__: The above steps manage install and upgrade of all dependencies that are required by the CSM for Authorization proxy server. -### Upgrading Dell EMC CSI Driver(s) with CSM for Authorization enabled +### Upgrading Dell CSI Driver(s) with CSM for Authorization enabled Given a setup where the CSM for Authorization proxy server is already upgraded to the latest version, follow the upgrade instructions for the applicable CSI Driver(s) to upgrade the driver and the CSM for Authorization sidecar diff --git a/content/v3/contributionguidelines/_index.md b/content/v3/contributionguidelines/_index.md index 19b639c316..e02b519065 100644 --- a/content/v3/contributionguidelines/_index.md +++ b/content/v3/contributionguidelines/_index.md @@ -3,7 +3,7 @@ title: "Contribution Guidelines" linkTitle: "Contribution Guidelines" weight: 12 Description: > - Dell EMC Container Storage Modules (CSM) docs Contribution Guidelines + Dell Technologies (Dell) Container Storage Modules (CSM) docs Contribution Guidelines --- diff --git a/content/v3/csidriver/_index.md b/content/v3/csidriver/_index.md index a778a41266..495c29b500 100644 --- a/content/v3/csidriver/_index.md +++ b/content/v3/csidriver/_index.md @@ -2,11 +2,11 @@ --- title: "CSI Drivers" linkTitle: "CSI Drivers" -description: About Dell EMC CSI Drivers +description: About Dell Technologies (Dell) CSI Drivers weight: 3 --- -The CSI Drivers by Dell EMC implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell EMC Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using Dell storage system. +The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-csi.github.io/docs/) (CSI spec v1.5) enabled Container Orchestrator (CO) and Dell Storage Arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using Dell storage system. 
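Once deployed, each driver registers itself with the cluster as a `CSIDriver` object. A quick, generic way to confirm that the plug-in and its per-node registration are in place (standard kubectl, not specific to any one Dell driver):

```
# List the CSI drivers registered with this cluster
kubectl get csidriver

# Confirm the driver appears in each worker node's registration
kubectl get csinode
```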
![CSI Architecture](Architecture_Diagram.png) @@ -14,54 +14,57 @@ The CSI Drivers by Dell EMC implement an interface between [CSI](https://kuberne ### Supported Operating Systems/Container Orchestrator Platforms {{}} -| | PowerMax | PowerFlex |   Unity| PowerScale | PowerStore | +| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore | |---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:| -| Kubernetes | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | +| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | | RHEL | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x | 7.x,8.x | -| Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 | +| Ubuntu | 20.04 | 20.04 | 18.04, 20.04 | 18.04, 20.04 | 20.04 | | CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | -| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 | -| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | -| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.4.x | 3.4.x | 3.4.x | -| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 | -| VMware Tanzu | no | no | NFS | NFS | NFS | -| Rancher Kubernetes Engine | yes | yes | yes | yes | yes | +| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 | +| Red Hat OpenShift | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | 4.8, 4.8 EUS, 4.9 | +| Mirantis Kubernetes Engine | 3.4.x | 3.4.x | 3.5.x | 3.4.x | 3.4.x | +| Google Anthos | 1.6 | 1.8 | no | 1.9 | 1.9 | +| VMware Tanzu | no | no | NFS | NFS | NFS | +| Rancher Kubernetes Engine | yes | yes | yes | yes | yes | +| Amazon Elastic Kubernetes Service
Anywhere | no | yes | no | no | yes | + {{
}} ### CSI Driver Capabilities {{}} -| Features | PowerMax | PowerFlex |    Unity | PowerScale | PowerStore | -|--------------------------|:--------:|:------------------:|:---------:|:-----------------:|:----------:| -| CSI Specification | v1.5 | v1.5| v1.5 | v1.5 | v1.5 | -| Static Provisioning | yes | yes| yes | yes | yes | -| Dynamic Provisioning | yes | yes| yes | yes | yes | -| Expand Persistent Volume | yes | yes| yes | yes | yes | -| Create VolumeSnapshot | yes | yes| yes | yes | yes | -| Create Volume from Snapshot | yes | yes| yes | yes | yes | -| Delete Snapshot | yes | yes| yes | yes | yes | -| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) | RWO
(FC/iSCSI)
RWO/
RWX/
ROX
(Raw block) | RWO
RWO/
RWX/
ROX/
RWOP
(Raw block) | RWO/RWOP
(FC/iSCSI)
RWO/RWX/
RWOP
(RawBlock)
RWO/RWX/ROX/
RWOP
(NFS) | RWO/RWX/ROX/
RWOP | RWO/RWOP
(FC/iSCSI)
RWO/
RWX/
ROX/
RWOP
(RawBlock, NFS) | -| CSI Volume Cloning | yes | yes | yes | yes | yes | -| CSI Raw Block Volume | yes | yes | yes | no | yes | -| CSI Ephemeral Volume | no | yes | yes | yes | yes | -| Topology | yes | yes | yes | yes | yes | -| Multi-array | yes | yes | yes | yes | yes | -| Volume Health Monitoring | no | yes | yes | yes | yes | +| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore | +|--------------------------|:--------:|:---------:|:------:|:----------:|:----------:| +| CSI Driver version | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | +| Static Provisioning | yes | yes | yes | yes | yes | +| Dynamic Provisioning | yes | yes | yes | yes | yes | +| Expand Persistent Volume | yes | yes | yes | yes | yes | +| Create VolumeSnapshot | yes | yes | yes | yes | yes | +| Create Volume from Snapshot | yes | yes | yes | yes | yes | +| Delete Snapshot | yes | yes | yes | yes | yes | +| [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)| RWO/
RWOP(FC/iSCSI)
RWO/
RWX/
ROX/
RWOP(Raw block) | RWO/ROX/RWOP

RWX (Raw block only) | RWO/ROX/RWOP

RWX (Raw block & NFS only) | RWO/RWX/ROX/
RWOP | RWO/RWOP
(FC/iSCSI)
RWO/
RWX/
ROX/
RWOP
(RawBlock, NFS) | +| CSI Volume Cloning | yes | yes | yes | yes | yes | +| CSI Raw Block Volume | yes | yes | yes | no | yes | +| CSI Ephemeral Volume | no | yes | yes | yes | yes | +| Topology | yes | yes | yes | yes | yes | +| Multi-array | yes | yes | yes | yes | yes | +| Volume Health Monitoring | yes | yes | yes | yes | yes | {{
}} ### Supported Storage Platforms {{}} -| | PowerMax | PowerFlex |   Unity| PowerScale | PowerStore | -|---------------|:----------------:|:-------------------:|:----------------:|:-----------------:|:----------------:| -| Storage Array |5978.479.479, 5978.669.669, 5978.711.711, Unisphere 9.2| 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x | +| | PowerMax | PowerFlex | Unity | PowerScale | PowerStore | +|---------------|:-------------------------------------------------------:|:----------------:|:--------------------------:|:----------------------------------:|:----------------:| +| Storage Array |5978.479.479, 5978.711.711
Unisphere 9.2| 3.5.x, 3.6.x | 5.0.7, 5.1.0, 5.1.2 | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | 1.0.x, 2.0.x, 2.1.x | {{
}} ### Backend Storage Details {{}} -| Features | PowerMax | PowerFlex |   Unity | PowerScale| PowerStore | +| Features | PowerMax | PowerFlex | Unity | PowerScale | PowerStore | |---------------|:----------------:|:------------------:|:----------------:|:----------------:|:----------------:| | Fibre Channel | yes | N/A | yes | N/A | yes | | iSCSI | yes | N/A | yes | N/A | yes | +| NVMeTCP | N/A | N/A | N/A | N/A | yes | | NFS | N/A | N/A | yes | yes | yes | | Other | N/A | ScaleIO protocol | N/A | N/A | N/A | -| Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS | -| Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin | +| Supported FS | ext4 / xfs | ext4 / xfs | ext3 / ext4 / xfs / NFS | NFS | ext3 / ext4 / xfs / NFS | +| Thin / Thick provisioning | Thin | Thin | Thin/Thick | N/A | Thin | | Platform-specific configurable settings | Service Level selection
iSCSI CHAP | - | Host IO Limit
Tiering Policy
NFS Host IO size
Snapshot Retention duration | Access Zone
NFS version (3 or 4); Configurable Export IPs | iSCSI CHAP | {{
}} diff --git a/content/v3/csidriver/features/powerflex.md b/content/v3/csidriver/features/powerflex.md index c92a4d993c..6353aa6f58 100644 --- a/content/v3/csidriver/features/powerflex.md +++ b/content/v3/csidriver/features/powerflex.md @@ -7,7 +7,7 @@ Description: Code features for PowerFlex Driver ## Volume Snapshot Feature -The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.20/1.21/1.22. +The CSI PowerFlex driver version 2.0 and higher supports v1 snapshots on Kubernetes 1.21/1.22/1.23. In order to use Volume Snapshots, ensure the following components are deployed to your cluster: - Kubernetes Volume Snapshot CRDs @@ -84,26 +84,25 @@ spec: This feature extends CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. This feature is available as a technical preview. To use this feature, users have to deploy the csi-volumegroupsnapshotter side-car as part of the PowerFlex driver. Once the sidecar has been deployed, users can make snapshots by using yaml files such as this one: ``` -apiVersion: volumegroup.storage.dell.com/v1alpha2 +apiVersion: volumegroup.storage.dell.com/v1 kind: DellCsiVolumeGroupSnapshot metadata: - # Name must be 13 characters or less in length name: "vg-snaprun1" namespace: "helmtest-vxflexos" spec: # Add fields here driverName: "csi-vxflexos.dellemc.com" # defines how to process VolumeSnapshot members when volume group snapshot is deleted - # "retain" - keep VolumeSnapshot instances - # "delete" - delete VolumeSnapshot instances - memberReclaimPolicy: "retain" + # "Retain" - keep VolumeSnapshot instances + # "Delete" - delete VolumeSnapshot instances + memberReclaimPolicy: "Retain" volumesnapshotclass: "vxflexos-snapclass" pvcLabel: "vgs-snap-label" # pvcList: # - "pvcName1" # - "pvcName2" ``` -In the metadata section, the name is limited to 13 characters because the snapshotter will append a timestamp to it. Additionally, the pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC: +The pvcLabel field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC: ``` metadata: name: pvol0 @@ -291,7 +290,7 @@ metadata: annotations: meta.helm.sh/release-name: vxflexos meta.helm.sh/release-namespace: vxflexos - storageclass.beta.kubernetes.io/is-default-class: "true" + storageclass.kubernetes.io/is-default-class: "true" creationTimestamp: "2020-05-27T13:24:55Z" labels: app.kubernetes.io/managed-by: Helm diff --git a/content/v3/csidriver/features/powermax.md b/content/v3/csidriver/features/powermax.md index 315786176c..55a57131c9 100644 --- a/content/v3/csidriver/features/powermax.md +++ b/content/v3/csidriver/features/powermax.md @@ -122,7 +122,7 @@ When challenged, the host initiator transmits a CHAP credential and CHAP secret ## Custom Driver Name -With version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell EMC PowerMax in the same Kubernetes/OpenShift cluster. +With version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster. 
To use this feature, set the following values under `customDriverName` in `my-powermax-settings.yaml`. - Value: Set this to the custom name of the driver. @@ -140,7 +140,7 @@ For example, if the driver name is set to _driver_ and it is installed in the na ### Install multiple drivers -To install multiple CSI Drivers for Dell EMC PowerMax in a single Kubernetes cluster, you can take advantage of the custom driver name feature. There are a few important restrictions that should be strictly adhered to: +To install multiple CSI Drivers for Dell PowerMax in a single Kubernetes cluster, you can take advantage of the custom driver name feature. There are a few important restrictions that should be strictly adhered to: - Only one driver can be installed in a single namespace - Different drivers should not connect to a single Unisphere server - Different drivers should not be used to manage a single PowerMax array @@ -176,7 +176,7 @@ kind: StorageClass metadata: name: powermax-expand-sc annotations: - storageclass.beta.kubernetes.io/is-default-class: false + storageclass.kubernetes.io/is-default-class: false provisioner: csi-powermax.dellemc.com reclaimPolicy: Delete allowVolumeExpansion: true #Set this attribute to true if you plan to expand any PVCs @@ -458,3 +458,32 @@ To update the log level dynamically, the user has to edit the ConfigMap `powerma ``` kubectl edit configmap -n powermax powermax-config-params ``` + +## Volume Health Monitoring + +CSI Driver for Dell PowerMax 2.2.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set controller.healthMonitor.enabled and node.healthMonitor.enabled to true. To change the monitor interval, set controller.healthMonitor.interval parameter. + +## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE) + +Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI Driver for PowerMax 2.2.0+ and Kubernetes version 1.22+. + +To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet, by setting command line arguments: +`--feature-gates="...,ReadWriteOncePod=true"` + +### Creating a PersistentVolumeClaim +```yaml +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: single-writer-only +spec: + accessModes: + - ReadWriteOncePod # the volume can be mounted as read-write by a single pod across the whole cluster + resources: + requests: + storage: 1Gi +``` + +When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume. + +To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes). \ No newline at end of file diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md index 98536afa97..acaee8b878 100644 --- a/content/v3/csidriver/features/powerscale.md +++ b/content/v3/csidriver/features/powerscale.md @@ -129,7 +129,7 @@ Following are the manifests for the Volume Snapshot Class: 1. 
VolumeSnapshotClass ```yaml -# For kubernetes version 20 and above (v1 snaps) + apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: @@ -192,6 +192,8 @@ spec: storage: 5Gi ``` +> Starting from CSI PowerScale driver version 2.2, it is allowed to create PersistentVolumeClaim from VolumeSnapshot with different isi paths i.e., isi paths of the new volume and the VolumeSnapshot can be different. + ## Volume Expansion The CSI PowerScale driver version 1.2 and later supports the expansion of Persistent Volumes (PVs). This expansion can be done either online (for example, when a PVC is attached to a node) or offline (for example, when a PVC is not attached to any node). @@ -206,7 +208,7 @@ kind: StorageClass metadata: name: isilon-expand-sc annotations: - storageclass.beta.kubernetes.io/is-default-class: "false" + storageclass.kubernetes.io/is-default-class: "false" provisioner: "csi-isilon.dellemc.com" reclaimPolicy: Delete parameters: @@ -424,7 +426,7 @@ For a cluster with multiple network interfaces and if a user wants to segregate ## Volume Limit -The CSI Driver for Dell EMC PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. +The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node max-isilon-volumes-per-node=` @@ -441,7 +443,7 @@ Similarly, users can define the tolerations based on various conditions like mem ## Usage of SmartQuotas to Limit Storage Consumption -CSI driver for Dell EMC Isilon handles capacity limiting using SmartQuotas feature. +CSI driver for Dell Isilon handles capacity limiting using SmartQuotas feature. To use the SmartQuotas feature user can specify the boolean value 'enableQuota' in myvalues.yaml or my-isilon-settings.yaml. @@ -494,7 +496,7 @@ kubectl edit configmap -n isilon isilon-config-params ## NAT Support -CSI Driver for Dell EMC PowerScale is supported in the NAT environment. +CSI Driver for Dell PowerScale is supported in the NAT environment. ## Configurable permissions for volume directory @@ -531,7 +533,7 @@ Other ways of configuring powerscale volume permissions remain the same as helm- ## PV/PVC Metrics -CSI Driver for Dell EMC PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. +CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events. ### This feature can be enabled @@ -540,7 +542,7 @@ For example, if a volume were to be deleted from the array, or unmounted outside ## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE) -Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI Driver for PowerScale 2.1.0 and Kubernetes version 1.22+. +Use `ReadWriteOncePod(RWOP)` access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is supported for CSI Driver for PowerScale 2.1.0+ and Kubernetes version 1.22+. To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet, by setting command line arguments: `--feature-gates="...,ReadWriteOncePod=true"` diff --git a/content/v3/csidriver/features/powerstore.md b/content/v3/csidriver/features/powerstore.md index d05d280695..1f5b1fb50e 100644 --- a/content/v3/csidriver/features/powerstore.md +++ b/content/v3/csidriver/features/powerstore.md @@ -183,7 +183,7 @@ kind: StorageClass metadata: name: powerstore-expand-sc annotations: - storageclass.beta.kubernetes.io/is-default-class: false + storageclass.kubernetes.io/is-default-class: false provisioner: csi-powerstore.dellemc.com reclaimPolicy: Delete allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class @@ -340,6 +340,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t volumeAttributes: size: "20Gi" nasName: "csi-nas-name" + nfsAcls: "0777" ``` ## Controller HA @@ -413,7 +414,7 @@ allowedTopologies: - "true" ``` -This example matches all nodes where the driver has a connection to PowerStore with an IP of `127.0.0.1` via FibreChannel. Similar examples can be found in mentioned folder for NFS and iSCSI. +This example matches all nodes where the driver has a connection to PowerStore with an IP of `127.0.0.1` via FibreChannel. Similar examples can be found in mentioned folder for NFS, iSCSI and NVMe. 
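For illustration, an NVMe variant of the same `allowedTopologies` block might look like the sketch below; the label key shown here is hypothetical, so take the exact key format from the sample manifests in that folder:

```yaml
allowedTopologies:
  - matchLabelExpressions:
      # Hypothetical key for an NVMe connection; verify against the NVMe sample manifest
      - key: csi-powerstore.dellemc.com/127.0.0.1-nvmetcp
        values:
          - "true"
```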
You can check what labels your nodes contain by running `kubectl get nodes --show-labels` @@ -424,7 +425,7 @@ For any additional information about the topology, see the [Kubernetes Topology ## Reuse PowerStore hostname -The CSI PowerStore driver version 1.2 and later can automatically detect if the current node was already registered as a Host on the storage array before. It will check if Host initiators and node initiators (FC or iSCSI) match. If they do, the driver will not create a new host and will take the existing name of the Host as nodeID. +The CSI PowerStore driver version 1.2 and later can automatically detect if the current node was already registered as a Host on the storage array before. It will check if Host initiators and node initiators (FC, iSCSI or NVMe) match. If they do, the driver will not create a new host and will take the existing name of the Host as nodeID. ## Multiarray support @@ -444,8 +445,10 @@ Create a file called `config.yaml` and populate it with the following content password: "password" # password for connecting to API skipCertificateValidation: true # use insecure connection or not default: true # treat current array as a default (would be used by storage classes without arrayIP parameter) - blockProtocol: "ISCSI" # what SCSI transport protocol use on node side (FC, ISCSI, None, or auto) - nasName: "nas-server" # what NAS must be used for NFS volumes + blockProtocol: "ISCSI" # what transport protocol use on node side (FC, ISCSI, NVMeTCP, None, or auto) + nasName: "nas-server" # what NAS must be used for NFS volumes + nfsAcls: "0777" # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. + # NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares. - endpoint: "https://10.0.0.2/api/rest" globalID: "unique" username: "user" @@ -604,14 +607,14 @@ kubectl edit configmap -n csi-powerstore powerstore-config-params ## NAT Support -CSI Driver for Dell EMC Powerstore is supported in the NAT environment for NFS protocol. +CSI Driver for Dell Powerstore is supported in the NAT environment for NFS protocol. The user will be able to install the driver and able to create pods. ## PV/PVC Metrics -CSI Driver for Dell EMC Powerstore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set controller.healthMonitor.enabled and node.healthMonitor.enabled to true. To change the monitor interval, set controller.healthMonitor.volumeHealthMonitorInterval parameter. +CSI Driver for Dell Powerstore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set controller.healthMonitor.enabled and node.healthMonitor.enabled to true. To change the monitor interval, set controller.healthMonitor.volumeHealthMonitorInterval parameter. ## Single Pod Access Mode for PersistentVolumes @@ -638,3 +641,37 @@ spec: ``` >Note: The access mode ReadWriteOnce allows multiple pods to access a single volume within a single worker node and the behavior is consistent across all supported Kubernetes versions. 
+ +## POSIX mode bits and NFSv4 ACLs + +CSI PowerStore driver version 2.2.0 and later allows users to set user-defined permissions on NFS target mount directory using POSIX mode bits or NFSv4 ACLs. + +NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. Please ensure the order when providing the NFSv4 ACLs. + +To use this feature, provide permissions in `nfsAcls` parameter in values.yaml, secrets or NFS storage class. + +For example: + +1. POSIX mode bits + +```yaml +nfsAcls: "0755" +``` + +2. NFSv4 ACLs + +```yaml +nfsAcls: "A::OWNER@:rwatTnNcCy,A::GROUP@:rxtncy,A::EVERYONE@:rxtncy,A::user@domain.com:rxtncy" +``` + +>Note: If no values are specified, default value of "0777" will be set. +>POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares. + + +## NVMe/TCP Support + +CSI Driver for Dell Powerstore 2.2.0 and above supports NVMe/TCP provisioning. To enable NVMe/TCP provisioning, blockProtocol on secret should be specified as `NVMeTCP`. +In case blockProtocol is specified as `auto`, the driver will be able to find the initiators on the host and choose the protocol accordingly. If the host has multiple protocols enabled, then FC gets the highest priority followed by iSCSI and then NVMeTCP. + +>Note: NVMe/TCP is not supported on RHEL 7.x versions and CoreOS. +>NVMe/TCP is supported with Powerstore 2.1 and above. diff --git a/content/v3/csidriver/features/unity.md b/content/v3/csidriver/features/unity.md index b24ad1c022..7559245396 100644 --- a/content/v3/csidriver/features/unity.md +++ b/content/v3/csidriver/features/unity.md @@ -185,12 +185,12 @@ kind: StorageClass metadata: name: unity-expand-sc annotations: - storageclass.beta.kubernetes.io/is-default-class: false + storageclass.kubernetes.io/is-default-class: false provisioner: csi-unity.dellemc.com reclaimPolicy: Delete allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class parameters: - FsType: xfs + csi.storage.k8s.io/fstype: "xfs" ``` To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC unity-pvc-demo of size 3Gi, then you can resize it to 30Gi by updating the PVC. @@ -215,7 +215,7 @@ spec: ## Raw block support -The CSI Unity driver version 1.4 and later supports Raw Block Volumes. +The CSI Unity driver supports Raw Block Volumes. Raw Block volumes are created using the volumeDevices list in the pod template spec with each entry accessing a volumeClaimTemplate specifying a volumeMode: Block. The following is an example configuration: ```yaml @@ -310,7 +310,7 @@ spec: ## Ephemeral Inline Volume -The CSI Unity driver version 1.4 and later supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification. +The CSI Unity driver supports ephemeral inline CSI volumes. This feature allows CSI volumes to be specified directly in the pod specification. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed. 
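A minimal pod sketch using an ephemeral inline volume is shown below; the `volumeAttributes` keys mirror the NFS example that follows, and the available keys may differ by protocol:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: inline-volume-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: "/data"
          name: inline-volume
  volumes:
    - name: inline-volume
      csi:
        driver: csi-unity.dellemc.com
        # Inline volume attributes; "size" follows the NFS example below
        volumeAttributes:
          size: "5Gi"
```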
@@ -353,7 +353,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t - name: volume csi: driver: csi-unity.dellemc.com - fsType: "nfs" + csi.storage.k8s.io/fstype: "nfs" volumeAttributes: size: "20Gi" nasName: "csi-nas-name" @@ -361,7 +361,7 @@ To create `NFS` volume you need to provide `nasName:` parameters that point to t ## Controller HA -The CSI Unity driver version 1.4 and later supports the controller HA feature. Instead of StatefulSet controller pods deployed as a Deployment. +The CSI Unity driver supports controller HA feature. Instead of StatefulSet controller pods deployed as a Deployment. By default, number of replicas is set to 2, you can set the `controllerCount` parameter to 1 in `myvalues.yaml` if you want to disable controller HA for your installation. When installing via Operator you can change the `replicas` parameter in the `spec.driver` section in your Unity Custom Resource. @@ -407,7 +407,7 @@ As said before you can configure where node driver pods would be assigned in a s ## Topology -The CSI Unity driver version 1.4 and later supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed. +The CSI Unity driver supports Topology which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed. This Topology support does not include customer-defined topology, users cannot create their own labels for nodes, they should use whatever labels are returned by the driver and applied automatically by Kubernetes on its nodes. @@ -441,37 +441,23 @@ You can check what labels your nodes contain by running `kubectl get nodes --sho For any additional information about the topology, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html). -## Support for SLES 15 SP2 - -The CSI Driver for Dell EMC Unity requires the following set of packages installed on all worker nodes that run on SLES 15 SP2. - - - open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning** - - nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning** - - multipath-tools **multipath-tools is required in order to make use of FC and iSCSI protocols for provisioning** - - After installing open-iscsi, ensure "iscsi" and "iscsid" services have been started and /etc/isci/initiatorname.iscsi is created and has the host initiator id. The pre-requisites are mandatory for provisioning with the iSCSI protocol to work. - ## Volume Limit -The CSI Driver for Dell EMC Unity allows users to specify the maximum number of Unity volumes that can be used in a node. +The CSI Driver for Dell Unity allows users to specify the maximum number of Unity volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-unity-volumes-per-node` and specifying the volume limit for that node.
`kubectl label node max-unity-volumes-per-node=` The user can also set the volume limit for all the nodes in the cluster by setting the `maxUnityVolumesPerNode` attribute in the values.yaml file. ->**NOTE:**<br>
To reflect the changes after setting the value either via node label or in values.yaml file, user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.

If the value is set both by node label and values.yaml file then node label value will get the precedence and user has to remove the node label in order to reflect the values.yaml value.

The default value of `maxUnityVolumesPerNode` is 0.

If `maxUnityVolumesPerNode` is set to zero, then CO SHALL decide how many volumes of this type can be published by the controller to the node.

The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set. +>**NOTE:**
To reflect the changes after setting the value either via node label or in the values.yaml file, the user has to bounce the driver controller and node pods using the command `kubectl get pods -n unity --no-headers=true | awk '/unity-/{print $1}'| xargs kubectl delete -n unity pod`.<br>

If the value is set both by node label and in the values.yaml file, the node label value takes precedence; the user has to remove the node label for the values.yaml value to take effect.<br>

The default value of `maxUnityVolumesPerNode` is 0.

If `maxUnityVolumesPerNode` is set to zero, then the Container Orchestrator decides how many volumes of this type can be published by the controller to the node.<br>

The volume limit specified to `maxUnityVolumesPerNode` attribute is applicable to all the nodes in the cluster for which node label `max-unity-volumes-per-node` is not set. ## NAT Support -CSI Driver for Dell EMC Unity is supported in the NAT environment for NFS protocol. +CSI Driver for Dell Unity is supported in the NAT environment for NFS protocol. The user will be able to install the driver and able to create pods. -## Dynamic Logging Configuration - -This feature is introduced in CSI Driver for unity version 2.0.0. - ## Single Pod Access Mode for PersistentVolumes -CSI Driver for Unity now supports a new accessmode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity allows to restrict volume access to a single pod in the cluster +CSI Driver for Unity supports a new accessmode `ReadWriteOncePod` for PersistentVolumes and PersistentVolumeClaims. With this feature, CSI Driver for Unity allows to restrict volume access to a single pod in the cluster Prerequisites 1. Enable the ReadWriteOncePod feature gate for kube-apiserver, kube-scheduler, and kubelet as the ReadWriteOncePod access mode is in alpha for Kubernetes v1.22 and is only supported for CSI volumes. You can enable the feature by setting command line arguments: @@ -491,12 +477,14 @@ spec: ``` ## Volume Health Monitoring -CSI Driver for Unity now supports volume health monitoring. This is an alpha feature and requires feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`. +CSI Driver for Unity supports volume health monitoring. This is an alpha feature and requires feature gate to be enabled by setting command line arguments `--feature-gates="...,CSIVolumeHealth=true"`. This feature: 1. Reports on the condition of the underlying volumes via events when a volume condition is abnormal. We can watch the events on the describe of pvc `kubectl describe pvc -n ` 2. Collects the volume stats. We can see the volume usage in the node logs `kubectl logs -n -c driver` -By default this is disabled in CSI Driver for Unity. You will have to set the `volumeHealthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition. +By default this is disabled in CSI Driver for Unity. You will have to set the `healthMonitor.enable` flag for controller, node or for both in `values.yaml` to get the volume stats and volume condition. +## Dynamic Logging Configuration +This feature is introduced in CSI Driver for unity version 2.0.0. ### Helm based installation As part of driver installation, a ConfigMap with the name `unity-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. @@ -554,7 +542,7 @@ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: - storageclass.beta.kubernetes.io/is-default-class: "false" + storageclass.kubernetes.io/is-default-class: "false" name: unity-nfs parameters: arrayId: "APM0***XXXXXX" @@ -643,7 +631,7 @@ data: CSI_LOG_LEVEL: "info" ALLOW_RWO_MULTIPOD_ACCESS: "false" MAX_UNITY_VOLUMES_PER_NODE: "0" - SYNC_NODE_INFO_TIME_INTERVAL: "0" + SYNC_NODE_INFO_TIME_INTERVAL: "15" TENANT_NAME: "" ``` >Note: csi-unity supports Tenancy in multi-array setup, provided the TenantName is the same across Unity instances. 
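The `unity-config-params` ConfigMap above can also be updated in place to change the log level at runtime; a one-line sketch, assuming the driver is installed in the `unity` namespace:

```
kubectl patch configmap unity-config-params -n unity --type merge -p '{"data":{"CSI_LOG_LEVEL":"debug"}}'
```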
diff --git a/content/v3/csidriver/installation/helm/isilon.md b/content/v3/csidriver/installation/helm/isilon.md index 966de5509f..08d51943eb 100644 --- a/content/v3/csidriver/installation/helm/isilon.md +++ b/content/v3/csidriver/installation/helm/isilon.md @@ -3,7 +3,7 @@ title: PowerScale description: > Installing CSI Driver for PowerScale via Helm --- -The CSI Driver for Dell EMC PowerScale can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerscale/tree/master/dell-csi-helm-installer). +The CSI Driver for Dell PowerScale can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerscale/tree/master/dell-csi-helm-installer). The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace: - CSI Driver for PowerScale @@ -18,16 +18,17 @@ The node section of the Helm chart installs the following component in a _Daemon ## Prerequisites -The following are requirements to be met before installing the CSI Driver for Dell EMC PowerScale: +The following are requirements to be met before installing the CSI Driver for Dell PowerScale: - Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities)) - Install Helm 3 - Mount propagation is enabled on container runtime that is being used - If using Snapshot feature, satisfy all Volume Snapshot requirements - If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first +- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first ### Install Helm 3.0 -Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerScale. +Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerScale. **Steps** @@ -44,20 +45,50 @@ controller: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. 
In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) *NOTE:* - The manifests available on GitHub install the snapshotter image: - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags) - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. +## Volume Health Monitoring + +Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via helm. +To enable this feature, add the below block to the driver manifest before installing the driver. This ensures to install external +health monitor sidecar. To get the volume health state value under controller should be set to true as seen below. To get the +volume stats value under node should be set to true. + ```yaml +controller: + healthMonitor: + # enabled: Enable/Disable health monitor of CSI volumes + # Allowed values: + # true: enable checking of health condition of CSI volumes + # false: disable checking of health condition of CSI volumes + # Default value: None + enabled: false + # healthMonitorInterval: Interval of monitoring volume health condition + # Allowed values: Number followed by unit (s,m,h) + # Examples: 60s, 5m, 1h + # Default value: 60s + interval: 60s +node: + healthMonitor: + # enabled: Enable/Disable health monitor of CSI volumes- volume usage, volume condition + # Allowed values: + # true: enable checking of health condition of CSI volumes + # false: disable checking of health condition of CSI volumes + # Default value: None + enabled: false + ``` + #### Installation example You can install CRDs and the default snapshot controller by running the following commands: @@ -65,17 +96,31 @@ You can install CRDs and the default snapshot controller by running the followin git clone https://github.com/kubernetes-csi/external-snapshotter/ cd ./external-snapshotter git checkout release- -kubectl create -f client/config/crd -kubectl create -f deploy/kubernetes/snapshot-controller +kubectl kustomize client/config/crd | kubectl create -f - +kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f - ``` *NOTE:* -- It is recommended to use 4.2.x version of snapshotter/snapshot-controller. +- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. + +### (Optional) Replication feature Requirements + +Applicable only if you decided to enable the Replication feature in `values.yaml` + +```yaml +replication: + enabled: true +``` +#### Replication CRD's + +The CRDs for replication can be obtained and installed from the csm-replication project on Github. Use `csm-replication/deploy/replicationcrds.all.yaml` located in the csm-replication git repo for the installation. 
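For example, the CRDs can be applied directly with `kubectl` (a sketch, assuming the csm-replication repository has been cloned locally):

```
# Run from the parent directory of the cloned csm-replication repository
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml
```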
+ +CRDs should be configured during replication prepare stage with repctl as described in [install-repctl](../../../../replication/deployment/install-repctl) ## Install the Driver **Steps** -1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerscale.git` to clone the git repository. +1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git` to clone the git repository. 2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace. 3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*. 4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation. @@ -93,6 +138,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller | verbose | Indicates what content of the OneFS REST API message should be logged in debug level logs | Yes | 1 | | kubeletConfigDir | Specify kubelet config dir path | Yes | "/var/lib/kubelet" | | enableCustomTopology | Indicates PowerScale FQDN/IP which will be fetched from node label and the same will be used by controller and node pod to establish a connection to Array. This requires enableCustomTopology to be enabled. | No | false | + | fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" | | ***controller*** | Configure controller pod specific parameters | | | | controllerCount | Defines the number of csi-powerscale controller pods to deploy to the Kubernetes release| Yes | 2 | | volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" | @@ -103,6 +149,9 @@ kubectl create -f deploy/kubernetes/snapshot-controller | healthMonitor.interval | Interval of monitoring volume health condition | Yes | 60s | | nodeSelector | Define node selection constraints for pods of controller deployment | No | | | tolerations | Define tolerations for the controller deployment, if required | No | | + | leader-election-lease-duration | Duration, that non-leader candidates will wait to force acquire leadership | No | 20s | + | leader-election-renew-deadline | Duration, that the acting leader will retry refreshing leadership before giving up | No | 15s | + | leader-election-retry-period | Duration, the LeaderElector clients should wait between tries of actions | No | 5s | | ***node*** | Configure node pod specific parameters | | | | nodeSelector | Define node selection constraints for pods of node daemonset | No | | | tolerations | Define tolerations for the node daemonset, if required | No | | @@ -111,6 +160,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller | ***PLATFORM ATTRIBUTES*** | | | | | endpointPort | Define the HTTPs port number of the PowerScale OneFS API server. If authorization is enabled, endpointPort should be the HTTPS localhost port that the authorization sidecar will listen on. This value acts as a default value for endpointPort, if not specified for a cluster config in secret. | No | 8080 | | skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. 
This value acts as a default value for skipCertificateValidation, if not specified for a cluster config in secret. | No | true | + | isiAuthType | Indicates the authentication method to be used. If set to 1 then it follows as session-based authentication else basic authentication | No | 0 | | isiAccessZone | Define the name of the access zone a volume can be created in. If storageclass is missing with AccessZone parameter, then value of isiAccessZone is used for the same. | No | System | | enableQuota | Indicates whether the provisioner should attempt to set (later unset) quota on a newly provisioned volume. This requires SmartQuotas to be enabled.| No | true | | isiPath | Define the base path for the volumes to be created on PowerScale cluster. This value acts as a default value for isiPath, if not specified for a cluster config in secret| No | /ifs/data/csi | @@ -121,7 +171,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller | sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " | | proxyHost | Hostname of the csm-authorization server. | No | Empty | | skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true | - + *NOTE:* - ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state till new nodes are available for scheduling. The installer exits with a WARNING on the same. @@ -141,6 +191,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller | skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. | No | default value from values.yaml | | endpointPort | Specify the HTTPs port number of the PowerScale OneFS API server | No | default value from values.yaml | | isiPath | The base path for the volumes to be created on PowerScale cluster. Note: IsiPath parameter in storageclass, if present will override this attribute. | No | default value from values.yaml | + | mountEndpoint | Endpoint of the PowerScale OneFS API server, for example, 10.0.0.1. This must be specified if [CSM-Authorization](https://github.com/dell/karavi-authorization) is enabled. | No | - | The username specified in *secret.yaml* must be from the authentication providers of PowerScale. The user must have enough privileges to perform the actions. The suggested privileges are as follows: @@ -164,7 +215,7 @@ Create isilon-creds secret using the following command: - For the key isiIP/endpoint, the user can give either IP address or FQDN. Also, the user can prefix 'https' (For example, https://192.168.1.1) with the value. - The *isilon-creds* secret has a *mountEndpoint* parameter which should only be updated and used when [Authorization](../../../../authorization) is enabled. -7. Install OneFS CA certificates by following the instructions from the next section, if you want to validate OneFS API server's certificates. If not, create an empty secret using the following command and an empty secret must be created for the successful installation of CSI Driver for Dell EMC PowerScale. +7. Install OneFS CA certificates by following the instructions from the next section, if you want to validate OneFS API server's certificates. If not, create an empty secret using the following command and an empty secret must be created for the successful installation of CSI Driver for Dell PowerScale. 
``` kubectl create -f empty-secret.yaml ``` @@ -196,7 +247,7 @@ If the 'skipCertificateValidation' parameter is set to false and a previous inst ### Dynamic update of array details via secret.yaml -CSI Driver for Dell EMC PowerScale now provides supports for Multi cluster. Now users can link the single CSI Driver to multiple OneFS Clusters by updating *secret.yaml*. Users can now update the isilon-creds secret by editing the *secret.yaml* and executing the following command +CSI Driver for Dell PowerScale now provides supports for Multi cluster. Now users can link the single CSI Driver to multiple OneFS Clusters by updating *secret.yaml*. Users can now update the isilon-creds secret by editing the *secret.yaml* and executing the following command `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -` @@ -206,11 +257,11 @@ CSI Driver for Dell EMC PowerScale now provides supports for Multi cluster. Now ## Storage Classes -The CSI driver for Dell EMC PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/ update the manifest as per the requirements. +The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A sample storage class manifest is available at `samples/storageclass/isilon.yaml`. Use this sample manifest to create a storageclass to provision storage; uncomment/ update the manifest as per the requirements. ### What happens to my existing storage classes? -*Upgrading from CSI PowerScale v2.0 driver* +*Upgrading from CSI PowerScale v2.1 driver*: The storage classes created as part of the installation have an annotation - "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish so. *NOTE*: @@ -232,9 +283,9 @@ Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volu ### What happens to my existing Volume Snapshot Classes? -*Upgrading from CSI PowerScale v2.0 driver*: +*Upgrading from CSI PowerScale v2.1 driver*: The existing volume snapshot class will be retained. *Upgrading from an older version of the driver*: -It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.1. +It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.2. diff --git a/content/v3/csidriver/installation/helm/powerflex.md b/content/v3/csidriver/installation/helm/powerflex.md index 06354ccfcb..9bdb0ccdc0 100644 --- a/content/v3/csidriver/installation/helm/powerflex.md +++ b/content/v3/csidriver/installation/helm/powerflex.md @@ -5,22 +5,22 @@ description: > Installing the CSI Driver for PowerFlex via Helm --- -The CSI Driver for Dell EMC PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerflex/tree/master/dell-csi-helm-installer). 
+The CSI Driver for Dell PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerflex/tree/master/dell-csi-helm-installer). The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace: -- CSI Driver for Dell EMC PowerFlex +- CSI Driver for Dell PowerFlex - Kubernetes External Provisioner, which provisions the volumes - Kubernetes External Attacher, which attaches the volumes to the containers - Kubernetes External Snapshotter, which provides snapshot support - Kubernetes External Resizer, which resizes the volume The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace: -- CSI Driver for Dell EMC PowerFlex +- CSI Driver for Dell PowerFlex - Kubernetes Node Registrar, which handles the driver registration ## Prerequisites -The following are requirements that must be met before installing the CSI Driver for Dell EMC PowerFlex: +The following are requirements that must be met before installing the CSI Driver for Dell PowerFlex: - Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities)) - Install Helm 3 - Enable Zero Padding on PowerFlex @@ -33,7 +33,7 @@ The following are requirements that must be met before installing the CSI Driver ### Install Helm 3.0 -Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerFlex. +Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerFlex. **Steps** @@ -41,7 +41,7 @@ Install Helm 3.0 on the master node before you install the CSI Driver for Dell E ### Enable Zero Padding on PowerFlex -Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use PowerFlex GUI or the PowerFlex CLI to check this setting. For more information to configure this setting, see [Dell EMC PowerFlex documentation](https://cpsdocs.dellemc.com/bundle/PF_CONF_CUST/page/GUID-D32BDFF7-3014-4894-8E1E-2A31A86D343A.html). +Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use PowerFlex GUI or the PowerFlex CLI to check this setting. For more information to configure this setting, see [Dell PowerFlex documentation](https://cpsdocs.dellemc.com/bundle/PF_CONF_CUST/page/GUID-D32BDFF7-3014-4894-8E1E-2A31A86D343A.html). ### Install PowerFlex Storage Data Client @@ -51,17 +51,17 @@ currently only Red Hat CoreOS (RHCOS). On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment). Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions. -**Optional:** For a typical install, you will pull SDC kernel modules from the Dell EMC FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The [PowerFlex KB article](https://www.dell.com/support/kbdoc/en-us/000184206/how-to-use-a-private-repository-for) has instructions on how to do this. +**Optional:** For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. 
The [PowerFlex KB article](https://www.dell.com/support/kbdoc/en-us/000184206/how-to-use-a-private-repository-for) has instructions on how to do this. #### Manual SDC Deployment -For detailed PowerFlex installation procedure, see the [Dell EMC PowerFlex Deployment Guide](https://docs.delltechnologies.com/bundle/VXF_DEPLOY/page/GUID-DD20489C-42D9-42C6-9795-E4694688CC75.html). Install the PowerFlex SDC as follows: +For detailed PowerFlex installation procedure, see the [Dell PowerFlex Deployment Guide](https://docs.delltechnologies.com/bundle/VXF_DEPLOY/page/GUID-DD20489C-42D9-42C6-9795-E4694688CC75.html). Install the PowerFlex SDC as follows: **Steps** -1. Download the PowerFlex SDC from [Dell EMC Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version. +1. Download the PowerFlex SDC from [Dell Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version. 2. Export the shell variable _MDM_IP_ in a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP address in your environment. This list contains the IP addresses of the MDMs. -3. Install the SDC per the _Dell EMC PowerFlex Deployment Guide_: +3. Install the SDC per the _Dell PowerFlex Deployment Guide_: - For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version. 4. To add more MDM_IP for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx.xx,10.xx.xx.xx` @@ -77,14 +77,14 @@ controller: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. 
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -98,18 +98,18 @@ You can install CRDs and default snapshot controller by running the following commands:
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- When using Kubernetes 1.20/1.21/1.22 it is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- When using Kubernetes 1.21/1.22/1.23 it is recommended to use the 5.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.

## Install the Driver

**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.

2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.

@@ -182,7 +182,9 @@ format and replace/update the secret.
   - "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
   - Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
-
+   - If you are using an extended Kubernetes version such as "v1.21.3-mirantis-1", use the kubeVersion check below in the helm/csi-vxflexos/Chart.yaml file.
+     kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.

6. If using automated SDC deployment:
@@ -248,6 +250,7 @@
- This install script also runs the `verify.sh` script. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if SDC has been configured on all nodes.
- It is mandatory to run the install script after changes to the MDM configuration in the `vxflexos-config` secret. Refer to [dynamic-array-configuration](../../../features/powerflex#dynamic-array-configuration)
+- If an extended Kubernetes version is being used (e.g. `v1.21.3-mirantis-1`) and is failing the version check in Helm even though it falls in the allowed range, then you must go into `helm/csi-vxflexos/Chart.yaml` and replace the standard `kubeVersion` check with the commented-out alternative.
  *Please note* that this will also allow the use of pre-release alpha and beta versions of Kubernetes, which is not supported.

- (Optional) Enable additional Mount Options
  - Users can specify additional mount options as needed for the driver.
  - Mount options are specified in storageclass yaml under _mkfsFormatOption_.
@@ -255,7 +258,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller

## Certificate validation for PowerFlex Gateway REST API calls

-This topic provides details about setting up the certificate for the CSI Driver for Dell EMC PowerFlex.
+This topic provides details about setting up the certificate for the CSI Driver for Dell PowerFlex.

*Before you begin*

@@ -333,13 +336,10 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c

Starting with CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.

-*NOTE*
-Support for v1beta1 snapshots is being discontinued in this release.
-
### What happens to my existing Volume Snapshot Classes?

-*Upgrading from CSI PowerFlex v2.0 driver*:
+*Upgrading from CSI PowerFlex v2.1 driver*:
The existing volume snapshot class will be retained.

*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.2.
diff --git a/content/v3/csidriver/installation/helm/powermax.md b/content/v3/csidriver/installation/helm/powermax.md
index 8c79c2077b..ef8882ce05 100644
--- a/content/v3/csidriver/installation/helm/powermax.md
+++ b/content/v3/csidriver/installation/helm/powermax.md
@@ -5,23 +5,25 @@ description: >
Installing CSI Driver for PowerMax via Helm
---

-CSI Driver for Dell EMC PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script [documentation](https://github.com/dell/csi-powermax/tree/master/dell-csi-helm-installer).
+CSI Driver for Dell PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script [documentation](https://github.com/dell/csi-powermax/tree/master/dell-csi-helm-installer).

The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace:
-- CSI Driver for Dell EMC PowerMax
+- CSI Driver for Dell PowerMax
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
-- CSI PowerMax ReverseProxy (optional)
+- (optional) Kubernetes External Health Monitor, which provides volume health status
+- (optional) CSI PowerMax ReverseProxy, which maximizes CSI driver and Unisphere performance
+- (optional) Dell CSI Replicator, which provides replication capability.
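The optional controller components listed above are toggled through the chart's values file. As a hedged illustration only, a sketch that enables the external health monitor sidecar; the parameter names come from the configuration table later on this page, and the values file name and installer invocation mirror the pattern used elsewhere in these docs:

```bash
# Sketch: enable the optional external health monitor sidecar, then
# install with the bundled installer script.
cat > my-powermax-settings.yaml <<'EOF'
controller:
  healthMonitor:
    enabled: true   # deploy the external health monitor sidecar
    interval: 60s   # how often volume health is checked
EOF
cd dell-csi-helm-installer
./csi-install.sh --namespace powermax --values ../my-powermax-settings.yaml
```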
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace: -- CSI Driver for Dell EMC PowerMax +- CSI Driver for Dell PowerMax - Kubernetes Node Registrar, which handles the driver registration ## Prerequisites -The following requirements must be met before installing CSI Driver for Dell EMC PowerMax: +The following requirements must be met before installing CSI Driver for Dell PowerMax: - Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities)) - Install Helm 3 - Fibre Channel requirements @@ -34,7 +36,7 @@ The following requirements must be met before installing CSI Driver for Dell EMC ### Install Helm 3 -Install Helm 3 on the master node before you install CSI Driver for Dell EMC PowerMax. +Install Helm 3 on the master node before you install CSI Driver for Dell PowerMax. **Steps** @@ -43,23 +45,23 @@ Install Helm 3 on the master node before you install CSI Driver for Dell EMC Pow ### Fibre Channel Requirements -CSI Driver for Dell EMC PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver: +CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver: - Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed. - Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array. - If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs. ### iSCSI Requirements -The CSI Driver for Dell EMC PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays. +The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays. Set up the iSCSI initiators as follows: - All Kubernetes nodes must have the _iscsi-initiator-utils_ package installed. - Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed. -- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell EMC PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell EMC PowerMax if required. -- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell EMC PowerMax array. -- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell EMC PowerMax array. All the port group names supplied to the driver must exist on each Dell EMC PowerMax with the same name. +- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required. +- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array. +- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. 
All the port group names supplied to the driver must exist on each Dell PowerMax with the same name. -For more information about configuring iSCSI, see [Dell EMC Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf). +For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf). ### Certificate validation for Unisphere REST API calls @@ -80,11 +82,11 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis There are no restrictions to how many ports can be present in the iSCSI port groups provided to the driver. -The same applies to Fibre Channel where there are no restrictions on the number of FA directors a host HBA can be zoned to. See the best practices for host connectivity to Dell EMC PowerMax to ensure that you have multiple paths to your data volumes. +The same applies to Fibre Channel where there are no restrictions on the number of FA directors a host HBA can be zoned to. See the best practices for host connectivity to Dell PowerMax to ensure that you have multiple paths to your data volumes. ### Linux multipathing requirements -CSI Driver for Dell EMC PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver. +CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver. Set up Linux multipathing as follows: @@ -112,7 +114,7 @@ snapshot: ``` #### Volume Snapshot CRD's -The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots. @@ -120,7 +122,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo - A common snapshot controller - A CSI external-snapshotter sidecar -The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) +The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. 
In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)

*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -134,12 +136,12 @@ You can install CRDs and the default snapshot controller by running the following commands:
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.

### (Optional) Replication feature Requirements

@@ -160,7 +162,7 @@ CRDs should be configured during replication prepare stage with repctl as descri

**Steps**

-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one.
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example:
@@ -192,11 +194,14 @@ CRDs should be configured during replication prepare stage with repctl as descri
| snapshot.enabled | Enable/Disable volume snapshot feature | Yes | true |
| snapshot.snapNamePrefix | Defines a string prefix for the names of the Snapshots created | Yes | "snapshot" |
| resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
+| healthMonitor.enabled | Enable/disable volume health monitor | No | false |
+| healthMonitor.interval | Interval of monitoring volume health condition | No | 60s |
| nodeSelector | Define node selection constraints for pods of controller deployment | No | |
| tolerations | Define tolerations for the controller deployment, if required | No | |
| **node** | Allows configuration of the node-specific parameters.| - | - |
| tolerations | Add tolerations as per requirement | No | - |
| nodeSelector | Add node selectors as per requirement | No | - |
+| healthMonitor.enabled | Enable/disable volume health monitor | No | false |
| **global**| This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
|defaultCredentialsSecret| This secret name refers to:
1. The Unisphere credentials if the driver is installed without proxy or with proxy in Linked mode.
2. The proxy credentials if the driver is installed with proxy in StandAlone mode.
3. The default Unisphere credentials if credentialsSecret is not specified for a management server.| Yes | powermax-creds | | storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - | @@ -250,11 +255,11 @@ Starting with CSI PowerMax v1.7, `dell-csi-helm-installer` will not create any V ### What happens to my existing Volume Snapshot Classes? -*Upgrading from CSI PowerMax v2.0 driver*: +*Upgrading from CSI PowerMax v2.1 driver*: The existing volume snapshot class will be retained. *Upgrading from an older version of the driver*: -It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7 or higher, before upgrading to 2.1. +It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7 or higher, before upgrading to 2.2. ## Sample values file The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes diff --git a/content/v3/csidriver/installation/helm/powerstore.md b/content/v3/csidriver/installation/helm/powerstore.md index 868d7c27cd..7b009d83a4 100644 --- a/content/v3/csidriver/installation/helm/powerstore.md +++ b/content/v3/csidriver/installation/helm/powerstore.md @@ -4,26 +4,26 @@ description: > Installing CSI Driver for PowerStore via Helm --- -The CSI Driver for Dell EMC PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerstore/tree/master/dell-csi-helm-installer). +The CSI Driver for Dell PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-powerstore/tree/master/dell-csi-helm-installer). The controller section of the Helm chart installs the following components in a _Deployment_ in the specified namespace: -- CSI Driver for Dell EMC PowerStore +- CSI Driver for Dell PowerStore - Kubernetes External Provisioner, which provisions the volumes - Kubernetes External Attacher, which attaches the volumes to the containers - (Optional) Kubernetes External Snapshotter, which provides snapshot support -- Kubernetes External Resizer, which resizes the volume +- (Optional) Kubernetes External Resizer, which resizes the volume The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace: -- CSI Driver for Dell EMC PowerStore +- CSI Driver for Dell PowerStore - Kubernetes Node Registrar, which handles the driver registration ## Prerequisites -The following are requirements to be met before installing the CSI Driver for Dell EMC PowerStore: +The following are requirements to be met before installing the CSI Driver for Dell PowerStore: - Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities)) - Install Helm 3 -- If you plan to use either the Fibre Channel or iSCSI protocol, refer to either _Fibre Channel requirements_ or _Set up the iSCSI Initiator_ sections below. You can use NFS volumes without FC or iSCSI configuration. -> You can use either the Fibre Channel or iSCSI protocol, but you do not need both. 
+- If you plan to use either the Fibre Channel, iSCSI, or NVMe/TCP protocol, refer to the _Fibre Channel requirements_, _Set up the iSCSI Initiator_, or _Set up the NVMe/TCP Initiator_ sections below. You can use NFS volumes without any FC, iSCSI, or NVMe/TCP configuration.
+> You can use either the Fibre Channel, iSCSI, or NVMe/TCP protocol, but you do not need all three.
> If you want to use preconfigured iSCSI/FC hosts, be sure to check that they are not part of any host group
- Linux native multipathing requirements
@@ -35,7 +35,7 @@ The following are requirements to be met before installing the CSI Driver for De

### Install Helm 3.0

-Install Helm 3.0 on the master node before you install the CSI Driver for Dell EMC PowerStore.
+Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerStore.

**Steps**

@@ -43,26 +43,39 @@ Install Helm 3.0 on the master node before you install the CSI Driver for Dell E

### Fibre Channel requirements

-Dell EMC PowerStore supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the
-following requirement is met before you install the CSI Driver for Dell EMC PowerStore:
+Dell PowerStore supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the
+following requirement is met before you install the CSI Driver for Dell PowerStore:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.

### Set up the iSCSI Initiator

-The CSI Driver for Dell EMC PowerStore v1.4 and higher supports iSCSI connectivity.
+The CSI Driver for Dell PowerStore v1.4 and higher supports iSCSI connectivity.

If you use the iSCSI protocol, set up the iSCSI initiators as follows:
- Ensure that the iSCSI initiators are available on both Controller and Worker nodes.
-- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell EMC PowerStore array that
-has IP interfaces. Manually create IP routes for each node that connects to the Dell EMC PowerStore.
+- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell PowerStore array that
+has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerStore.
- All Kubernetes nodes must have the _iscsi-initiator-utils_ package for CentOS/RHEL or _open-iscsi_ package for Ubuntu installed, and the _iscsid_ service must be enabled and running. To do this, run the `systemctl enable --now iscsid` command.
- Ensure that the unique initiator name is set in _/etc/iscsi/initiatorname.iscsi_.

-For information about configuring iSCSI, see _Dell EMC PowerStore documentation_ on Dell EMC Support.
+For information about configuring iSCSI, see the _Dell PowerStore documentation_ on Dell Support.
+
+
+### Set up the NVMe/TCP Initiator
+
+If you want to use the NVMe/TCP protocol, set up the NVMe/TCP initiators as follows:
+- The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and interactive shell option. Install the NVMe CLI tool on the host using the following command:
+`sudo apt install nvme-cli`
+
+- The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required for using NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the following commands:
+```bash
+modprobe nvme
+modprobe nvme_tcp
+```
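Beyond the one-time `modprobe`, you may want the modules to load on every boot. A small sketch, not taken from the PowerStore deployment guide, using systemd's standard `modules-load.d` mechanism; `192.0.2.10` is a placeholder array IP:

```bash
# Load the NVMe/TCP modules automatically at boot.
cat <<'EOF' | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF
# Optional connectivity check: list NVMe subsystems offered by the array.
sudo nvme discover -t tcp -a 192.0.2.10
```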
### Linux multipathing requirements

-Dell EMC PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell EMC
+Dell PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell
+PowerStore.

Set up Linux multipathing as follows:

@@ -82,7 +95,7 @@ snapshot:
```

#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) for the installation.
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.

#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar

The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available:
-Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) for the installation.
+Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.

*NOTE:*
- The manifests available on GitHub install the snapshotter image:
  - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.

+## Volume Health Monitoring
+
+The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via Helm.
+To enable it, add the block below to the driver manifest before installing the driver; this installs the external
+health monitor sidecar. To report the volume health state, `enabled` under `controller` should be set to true, as seen
+below. To report volume stats, `enabled` under `node` should be set to true.
+ ```yaml
+controller:
+  healthMonitor:
+    # enabled: Enable/Disable health monitor of CSI volumes
+    # Allowed values:
+    #   true: enable checking of health condition of CSI volumes
+    #   false: disable checking of health condition of CSI volumes
+    # Default value: None
+    enabled: false
+
+    # volumeHealthMonitorInterval: Interval of monitoring volume health condition
+    # Allowed values: Number followed by unit (s,m,h)
+    # Examples: 60s, 5m, 1h
+    # Default value: 60s
+    volumeHealthMonitorInterval: 60s
+
+node:
+  healthMonitor:
+    # enabled: Enable/Disable health monitor of CSI volumes - volume usage, volume condition
+    # Allowed values:
+    #   true: enable checking of health condition of CSI volumes
+    #   false: disable checking of health condition of CSI volumes
+    # Default value: None
+    enabled: false
+ ```

#### Installation example

You can install CRDs and the default snapshot controller by running the following commands:
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-
-kubectl create -f client/config/crd
-kubectl create -f deploy/kubernetes/snapshot-controller
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
*NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is installed along with the driver and does not involve any extra configuration.

### (Optional) Replication feature Requirements

@@ -129,7 +174,7 @@ CRDs should be configured during replication prepare stage with repctl as descri

## Install the Driver

**Steps**
-1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace, but make sure to use the same namespace throughout the installation.
3. Check `helm/csi-powerstore/driver-image.yaml` and confirm that the driver image points to the new image.
@@ -139,8 +184,10 @@ CRDs should be configured during replication prepare stage with repctl as descri
   - *username*, *password*: defines credentials for connecting to array.
   - *skipCertificateValidation*: defines if we should use insecure connection or not.
   - *isDefault*: defines if we should treat the current array as a default.
-  - *blockProtocol*: defines what SCSI transport protocol we should use (FC, ISCSI, None, or auto).
+  - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, None, or auto).
   - *nasName*: defines what NAS should be used for NFS volumes.
+  - *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
+    NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.

   Add more blocks similar to the above for each PowerStore array if necessary.
5. Create storage classes using the ones from the `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `

@@ -157,6 +204,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| externalAccess | Defines additional entries for hostAccess of NFS volumes, single IP address and subnet are valid entries | No | " " |
| kubeletConfigDir | Defines kubelet config path for cluster | Yes | "/var/lib/kubelet" |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. | Yes | "IfNotPresent" |
+| nfsAcls | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| connection.enableCHAP | Defines whether the driver should use CHAP for iSCSI connections or not | No | False |
| controller.controllerCount | Defines number of replicas of controller deployment | Yes | 2 |
| controller.volumeNamePrefix | Defines the string added to each volume that the CSI driver creates | No | "csivol" |
@@ -172,6 +220,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.healthMonitor.enabled | Enable/disable volume health monitor | No | false |
| node.nodeSelector | Defines what nodes would be selected for pods of node daemonset | Yes | " " |
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
+| fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None`, `File`, and `ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |

8. Install the driver using the `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
   - After the driver is installed, you can check the condition of the driver pods by running `kubectl get all -n csi-powerstore`

@@ -187,7 +236,7 @@ CRDs should be configured during replication prepare stage with repctl as descri

## Storage Classes

-The CSI driver for Dell EMC PowerStore version 1.3 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests have been provided in the `samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
+For the CSI driver for Dell PowerStore version 1.3 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests is provided in the `samples/storageclass` folder. Use these samples to create new storage classes to provision storage.

### What happens to my existing storage classes?

@@ -201,13 +250,14 @@
There are sample storage class yaml files available under `samples/storageclass`. These can be copied and modified as needed.

1. Edit the sample storage class yaml file and update the following parameters:
- *arrayID*: specifies what storage cluster the driver should use, if not specified driver will use storage cluster specified as `default` in `samples/secret/secret.yaml`
-- *FsType*: specifies what filesystem type driver should use, possible variants `ext4`, `xfs`, `nfs`, if not specified driver will use `ext4` by default.
+- *FsType*: specifies what filesystem type the driver should use, possible variants `ext3`, `ext4`, `xfs`, `nfs`; if not specified the driver will use `ext4` by default.
+- *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
- *allowedTopologies* (Optional): If you want, you can also add topology constraints.
```yaml
allowedTopologies:
  - matchLabelExpressions:
      - key: csi-powerstore.dellemc.com/12.34.56.78-iscsi
-# replace "-iscsi" with "-fc" or "-nfs" at the end to use FC or NFS enabled hosts
+# replace "-iscsi" with "-fc", "-nvme" or "-nfs" at the end to use FC, NVMe or NFS enabled hosts
# replace "12.34.56.78" with PowerStore endpoint IP
        values:
          - "true"
```
@@ -226,11 +276,11 @@ Starting CSI PowerStore v1.4, `dell-csi-helm-installer` will not create any Volu

### What happens to my existing Volume Snapshot Classes?

-*Upgrading from CSI PowerStore v2.0 driver*:
+*Upgrading from CSI PowerStore v2.1 driver*:
The existing volume snapshot class will be retained.

*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4 or higher, before upgrading to 2.2.

## Dynamically update the powerstore secrets

@@ -253,4 +303,4 @@ cd dell-csi-helm-installer
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade
```
-Note: here `my-powerstore-settings.yaml` is a `values.yaml` file which user has used for driver installation.
\ No newline at end of file
+Note: here `my-powerstore-settings.yaml` is a `values.yaml` file which the user has used for driver installation.
diff --git a/content/v3/csidriver/installation/helm/unity.md b/content/v3/csidriver/installation/helm/unity.md
index 1c7c5122fc..0db49246f5 100644
--- a/content/v3/csidriver/installation/helm/unity.md
+++ b/content/v3/csidriver/installation/helm/unity.md
@@ -4,7 +4,7 @@ description: >
Installing CSI Driver for Unity via Helm
---

-The CSI Driver for Dell EMC Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).
+The CSI Driver for Dell Unity can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script [documentation](https://github.com/dell/csi-unity/tree/master/dell-csi-helm-installer).

The controller section of the Helm chart installs the following components in a _Deployment_:

@@ -13,6 +13,7 @@ The controller section of the Helm chart installs the following components in a
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
+- Kubernetes External Health Monitor, which provides volume health status

The node section of the Helm chart installs the following component in a _DaemonSet_:

@@ -38,7 +39,7 @@ Install CSI Driver for Unity using this procedure.

*Before you begin*

- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.1.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.2.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure the _unity_ namespace exists in the Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if it is not present.

@@ -51,6 +52,8 @@ Procedure

   **Note**:
   * ArrayId corresponds to the serial number of the Unity array.
   * The Unity array username must have the Storage Administrator role to be able to perform CRUD operations.
+  * If you are using an extended Kubernetes version such as "v1.21.3-mirantis-1", use the kubeVersion check below in the helm/csi-unity/Chart.yaml file.
+    kubeVersion: ">= 1.21.0-0 < 1.24.0-0"

2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory as `csi-install.sh`, to customize settings for installation.

@@ -64,12 +67,13 @@ Procedure

| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
| logLevel | LogLevel is used to set the logging level of the driver | true | info |
| allowRWOMultiPodAccess | Flag to enable multiple pods to use the same PVC on the same node with RWO access mode. | false | false |
| kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
- | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | false | 15 |
+ | syncNodeInfoInterval | Time interval to add node info to the array. Default 15 minutes. The minimum value should be 1 minute. | false | 15 |
| maxUnityVolumesPerNode | Maximum number of volumes that the controller can publish to the node. | false | 0 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. (unity-cert-0..unity-cert-n). The minimum value should be 1. | false | 1 |
| imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
| podmon.enabled | service to monitor failing jobs and notify | false | - |
| podmon.image | podmon image name | false | - |
+ | tenantName | Tenant name added while adding host entry to the array | No | |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-unity controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -78,13 +82,13 @@ Procedure
| resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
| nodeSelector | Define node selection constraints for pods of controller deployment | No | |
| tolerations | Define tolerations for the controller deployment, if required | No | |
- | volumeHealthMonitor.enabled | Enable/Disable deployment of external health monitor sidecar for controller side volume health monitoring. | No | false |
- | volumeHealthMonitor.interval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h) | No | 60s |
+ | healthMonitor.enabled | Enable/Disable deployment of external health monitor sidecar for controller side volume health monitoring. | No | false |
+ | healthMonitor.interval | Interval of monitoring volume health condition.
Allowed values: Number followed by unit (s,m,h) | No | 60s | | ***node*** | Allows configuration of the node-specific parameters.| - | - | - | tolerations | Define tolerations for the node daemonset, if required | No | | | dnsPolicy | Define the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet | - | volumeHealthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false | - | tenantName | Tenant name added while adding host entry to the array | No | | + | healthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false | + | nodeSelector | Define node selection constraints for pods of node deployment | No | | + | tolerations | Define tolerations for the node deployment, if required | No | | **Note**: @@ -118,19 +122,19 @@ Procedure maxUnityVolumesPerNode: 0 ``` -4. For certificate validation of Unisphere REST API calls refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret with file `helm/emptysecret.yaml` file by running the `kubectl create -f helm/emptysecret.yaml` command. +4. For certificate validation of Unisphere REST API calls refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret with file `csi-unity/samples/secret/emptysecret.yaml` file by running the `kubectl create -f csi-unity/samples/secret/emptysecret.yaml` command. 5. Prepare the `secret.yaml` for driver configuration. The following table lists driver configuration parameters for multiple storage arrays. - | Parameter | Description | Required | Default | - | --------- | ----------- | -------- |-------- | - | storageArrayList.username | Username for accessing Unity system | true | - | - | storageArrayList.password | Password for accessing Unity system | true | - | - | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - | - | storageArrayList.arrayId | ArrayID for Unity system | true | - | + | Parameter | Description | Required | Default | + | ------------------------- | ----------------------------------- | -------- |-------- | + | storageArrayList.username | Username for accessing Unity system | true | - | + | storageArrayList.password | Password for accessing Unity system | true | - | + | storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity system| true | - | + | storageArrayList.arrayId | ArrayID for Unity system | true | - | | storageArrayList.skipCertificateValidation | "skipCertificateValidation " determines if the driver is going to validate unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of CA which signed the Unisphere certificate. | true | true | - | storageArrayList.isDefault | An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | false | false | + | storageArrayList.isDefault| An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. 
| true | - | Example: secret.yaml @@ -182,7 +186,7 @@ Procedure ``` **Note:** - * Parameters "allowRWOMultiPodAccess" and "syncNodeInfoTimeInterval" have been enabled for configuration in values.yaml and this helps users to dynamically change these values without the need for driver re-installation. + * Parameters "allowRWOMultiPodAccess" and "syncNodeInfoInterval" have been enabled for configuration in values.yaml and this helps users to dynamically change these values without the need for driver re-installation. 6. Setup for snapshots. @@ -197,19 +201,14 @@ Procedure In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster #### Volume Snapshot CRD's - The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd) for the installation. + The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation. #### Volume Snapshot Controller The CSI external-snapshotter sidecar is split into two controllers: - A common snapshot controller - A CSI external-snapshotter sidecar - Use [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller) for the installation. - - **Note**: - - The manifests available on GitHub install the snapshotter image: - - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags) - - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. + Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation. #### Installation example @@ -218,12 +217,12 @@ Procedure git clone https://github.com/kubernetes-csi/external-snapshotter/ cd ./external-snapshotter git checkout release- - kubectl create -f client/config/crd - kubectl create -f deploy/kubernetes/snapshot-controller + kubectl kustomize client/config/crd | kubectl create -f - + kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f - ``` **Note**: - - It is recommended to use 4.2.x version of snapshotter/snapshot-controller. + - It is recommended to use 5.0.x version of snapshotter/snapshot-controller. - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. 
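With the CRDs and snapshot controller in place, a Volume Snapshot Class still has to be created before snapshots can be taken (as noted later, the installer does not create one). A minimal sketch, assuming the default Unity driver name `csi-unity.dellemc.com`; the authoritative manifest is in the samples folder of the csi-unity repository:

```bash
kubectl create -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: unity-snapclass
driver: csi-unity.dellemc.com   # assumed driver name - verify for your install
deletionPolicy: Delete
EOF
```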
@@ -233,7 +232,7 @@ Procedure A successful installation must display messages that look similar to the following samples: ``` ------------------------------------------------------ - > Installing CSI Driver: csi-unity on 1.20 + > Installing CSI Driver: csi-unity on 1.22 ------------------------------------------------------ ------------------------------------------------------ > Checking to see if CSI Driver is already installed @@ -241,52 +240,52 @@ Procedure ------------------------------------------------------ > Verifying Kubernetes and driver configuration ------------------------------------------------------ - |- Kubernetes Version: 1.20 + |- Kubernetes Version: 1.22 | |- Driver: csi-unity | - |- Verifying Kubernetes versions - | - |--> Verifying minimum Kubernetes version Success - | - |--> Verifying maximum Kubernetes version Success + |- Verifying Kubernetes version | - |- Verifying that required namespaces have been created Success + |--> Verifying minimum Kubernetes version Success | - |- Verifying that required secrets have been created Success + |--> Verifying maximum Kubernetes version Success | - |- Verifying that required secrets have been created Success + |- Verifying that required namespaces have been created Success + | + |- Verifying that required secrets have been created Success + | + |- Verifying that optional secrets have been created Success | |- Verifying alpha snapshot resources - | - |--> Verifying that alpha snapshot CRDs are not installed Success + | + |--> Verifying that alpha snapshot CRDs are not installed Success | |- Verifying sshpass installation.. | |- Verifying iSCSI installation - Enter the root password of 10.**.**.**: + Enter the root password of 10.**.**.**: - Enter the root password of 10.**.**.**: + Enter the root password of 10.**.**.**: Success | |- Verifying snapshot support - | - |--> Verifying that snapshot CRDs are available Success - | - |--> Verifying that the snapshot controller is available Success | - |- Verifying helm version Success + |--> Verifying that snapshot CRDs are available Success + | + |--> Verifying that the snapshot controller is available Success | - |- Verifying helm values version Success + |- Verifying helm version Success + | + |- Verifying helm values version Success ------------------------------------------------------ > Verification Complete - Success ------------------------------------------------------ | - |- Installing Driver Success - | - |--> Waiting for Deployment unity-controller to be ready Success - | - |--> Waiting for DaemonSet unity-node to be ready Success + |- Installing Driver Success + | + |--> Waiting for Deployment unity-controller to be ready Success + | + |--> Waiting for DaemonSet unity-node to be ready Success ------------------------------------------------------ > Operation complete ------------------------------------------------------ @@ -301,7 +300,7 @@ Procedure ## Certificate validation for Unisphere REST API calls -This topic provides details about setting up the certificate validation for the CSI Driver for Dell EMC Unity. +This topic provides details about setting up the certificate validation for the CSI Driver for Dell Unity. *Before you begin* @@ -339,11 +338,11 @@ For CSI Driver for Unity version 1.6 and later, `dell-csi-helm-installer` does n ### What happens to my existing Volume Snapshot Classes? -*Upgrading from CSI Unity v2.0 driver*: +*Upgrading from CSI Unity v2.1 driver*: The existing volume snapshot class will be retained. 
*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity to 1.6 or higher, before upgrading to 2.1.
+It is strongly recommended to upgrade the earlier versions of CSI Unity to 1.6 or higher, before upgrading to 2.2.

## Storage Classes

@@ -360,7 +359,7 @@ Upgrading from an older version of the driver: The storage classes will be delet

>Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.

**Steps to create storage class:**
-There are samples storage class yaml files available under `helm/samples/storageclass`. These can be copied and modified as needed.
+There are sample storage class yaml files available under `csi-unity/samples/storageclass`. These can be copied and modified as needed.

1. Pick any of `unity-fc.yaml`, `unity-iscsi.yaml` or `unity-nfs.yaml`
2. Copy the file as `unity--fc.yaml`, `unity--iscsi.yaml` or `unity--nfs.yaml`
diff --git a/content/v3/csidriver/installation/offline/_index.md b/content/v3/csidriver/installation/offline/_index.md
index a6dd5941fa..59a7c082f3 100644
--- a/content/v3/csidriver/installation/offline/_index.md
+++ b/content/v3/csidriver/installation/offline/_index.md
@@ -1,10 +1,10 @@
---
-title: Offline Installation of Dell EMC CSI Storage Providers
+title: Offline Installation of Dell CSI Storage Providers
linktitle: Offline Installer
-description: Offline Installation of Dell EMC CSI Storage Providers
+description: Offline Installation of Dell CSI Storage Providers
---

-The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell EMC CSI Storage Providers, via either Helm
+The `csi-offline-bundle.sh` script can be used to create a package usable for offline installation of the Dell CSI Storage Providers, via either Helm
or the Dell CSI Operator.

This includes the following drivers:
@@ -43,6 +43,8 @@ To perform an offline installation of a driver or the Operator, the following st
2. Unpacking the offline bundle created in Step 1 and preparing for installation
3. Perform either a Helm installation or Operator installation using the files obtained after unpacking in Step 2

+**NOTE:** It is recommended to use the same build tool for packing and unpacking of images (either docker or podman).
+
### Building an offline bundle

This needs to be performed on a Linux system with access to the internet, as a git repo will need to be cloned and container images pulled from public registries.
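Condensed to its commands, the build step amounts to the following (the same commands whose full output is shown in the example below):

```bash
# Clone the operator repo and build the offline bundle; the script pulls
# every required container image and writes dell-csi-operator-bundle.tar.gz.
git clone https://github.com/dell/dell-csi-operator.git
cd dell-csi-operator
scripts/csi-offline-bundle.sh -c
```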
@@ -63,84 +65,73 @@ The resulting offline bundle file can be copied to another machine, if necessary For example, here is the output of a request to build an offline bundle for the Dell CSI Operator: ``` -[user@anothersystem /home/user]# git clone https://github.com/dell/dell-csi-operator.git +git clone https://github.com/dell/dell-csi-operator.git ``` ``` -[user@anothersystem /home/user]# cd dell-csi-operator +cd dell-csi-operator ``` ``` -[user@system /home/user/dell-csi-operator]# scripts/csi-offline-bundle.sh -c -* -* Building image manifest file +[root@user scripts]# ./csi-offline-bundle.sh -c * -* Pulling container images - - dellemc/csi-isilon:v1.4.0.000R - dellemc/csi-isilon:v1.5.0 - dellemc/csi-isilon:v1.6.0 - dellemc/csipowermax-reverseproxy:v1.3.0 - dellemc/csi-powermax:v1.5.0.000R - dellemc/csi-powermax:v1.6.0 - dellemc/csi-powermax:v1.7.0 - dellemc/csi-powerstore:v1.2.0.000R - dellemc/csi-powerstore:v1.3.0 - dellemc/csi-powerstore:v1.4.0 - dellemc/csi-unity:v1.4.0.000R - dellemc/csi-unity:v1.5.0 - dellemc/csi-unity:v1.6.0 - dellemc/csi-vxflexos:v1.3.0.000R - dellemc/csi-vxflexos:v1.4.0 - dellemc/csi-vxflexos:v1.5.0 - dellemc/dell-csi-operator:v1.4.0 +* Pulling and saving container images + + dellemc/csi-isilon:v2.0.0 + dellemc/csi-isilon:v2.1.0 + dellemc/csipowermax-reverseproxy:v1.4.0 + dellemc/csi-powermax:v2.0.0 + dellemc/csi-powermax:v2.1.0 + dellemc/csi-powerstore:v2.0.0 + dellemc/csi-powerstore:v2.1.0 + dellemc/csi-unity:v2.0.0 + dellemc/csi-unity:v2.1.0 + localregistry:5028/csi-unity/csi-unity:20220303110841 + dellemc/csi-vxflexos:v2.0.0 + dellemc/csi-vxflexos:v2.1.0 + localregistry:5035/csi-operator/dell-csi-operator:v1.7.0 dellemc/sdc:3.5.1.1 dellemc/sdc:3.5.1.1-1 + dellemc/sdc:3.6 docker.io/busybox:1.32.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.0.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 - k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 - k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 - k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 - k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 - k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 - k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 - k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 - k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0 - quay.io/k8scsi/csi-resizer:v1.0.0 - quay.io/k8scsi/csi-resizer:v1.1.0 - -* -* Saving images + ... + ... * * Copying necessary files - /dell/git/dell-csi-operator/config - /dell/git/dell-csi-operator/deploy - /dell/git/dell-csi-operator/samples - /dell/git/dell-csi-operator/scripts - /dell/git/dell-csi-operator/README.md - /dell/git/dell-csi-operator/LICENSE + /root/dell-csi-operator/driverconfig + /root/dell-csi-operator/deploy + /root/dell-csi-operator/samples + /root/dell-csi-operator/scripts + /root/dell-csi-operator/OLM.md + /root/dell-csi-operator/README.md + /root/dell-csi-operator/LICENSE * * Compressing release -dell-csi-operator-bundle/ -dell-csi-operator-bundle/samples/ -... --... 
-dell-csi-operator-bundle/LICENSE -dell-csi-operator-bundle/README.md + dell-csi-operator-bundle/ + dell-csi-operator-bundle/driverconfig/ + dell-csi-operator-bundle/driverconfig/config.yaml + dell-csi-operator-bundle/driverconfig/isilon_v200_v119.json + dell-csi-operator-bundle/driverconfig/isilon_v200_v120.json + dell-csi-operator-bundle/driverconfig/isilon_v200_v121.json + dell-csi-operator-bundle/driverconfig/isilon_v200_v122.json + dell-csi-operator-bundle/driverconfig/isilon_v210_v120.json + dell-csi-operator-bundle/driverconfig/isilon_v210_v121.json + dell-csi-operator-bundle/driverconfig/isilon_v210_v122.json + dell-csi-operator-bundle/driverconfig/isilon_v220_v121.json + dell-csi-operator-bundle/driverconfig/isilon_v220_v122.json + dell-csi-operator-bundle/driverconfig/isilon_v220_v123.json + dell-csi-operator-bundle/driverconfig/powermax_v200_v119.json + ... + ... * * Complete -Offline bundle file is: /dell/git/dell-csi-operator/dell-csi-operator-bundle.tar.gz +Offline bundle file is: /root/dell-csi-operator/dell-csi-operator-bundle.tar.gz + ``` ### Unpacking the offline bundle and preparing for installation @@ -161,7 +152,7 @@ The script will then perform the following steps: An example of preparing the bundle for installation (192.168.75.40:5000 refers to an image registry accessible to Kubernetes/OpenShift): ``` -[user@anothersystem /tmp]# tar xvfz dell-csi-operator-bundle.tar.gz +tar xvfz dell-csi-operator-bundle.tar.gz dell-csi-operator-bundle/ dell-csi-operator-bundle/samples/ ... @@ -171,99 +162,87 @@ dell-csi-operator-bundle/LICENSE dell-csi-operator-bundle/README.md ``` ``` -[user@anothersystem /tmp]# cd dell-csi-operator-bundle +cd dell-csi-operator-bundle ``` ``` -[user@anothersystem /tmp/dell-csi-operator-bundle]# scripts/csi-offline-bundle.sh -p -r 192.168.75.40:5000/operator -Preparing an offline bundle for installation +[root@user scripts]# ./csi-offline-bundle.sh -p -r localregistry:5000/csi-operator +Preparing a offline bundle for installation * * Loading docker images + 5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB + e20ed4c73206: Loading layer [==================================================>] 17.22MB/17.22MB + Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0 + d72a74c56330: Loading layer [==================================================>] 3.031MB/3.031MB + f2d2ab12e2a7: Loading layer [==================================================>] 48.08MB/48.08MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 + 417cb9b79ade: Loading layer [==================================================>] 3.062MB/3.062MB + 61fefb35ccee: Loading layer [==================================================>] 16.88MB/16.88MB + Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 + 7a5b9c0b4b14: Loading layer [==================================================>] 3.031MB/3.031MB + 1555ad6e2d44: Loading layer [==================================================>] 49.86MB/49.86MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 + 2de1422d5d2d: Loading layer [==================================================>] 54.56MB/54.56MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 + 25a1c1010608: Loading layer [==================================================>] 54.54MB/54.54MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2 + 07363fa84210: Loading layer [==================================================>] 3.062MB/3.062MB + 5227e51ea570: Loading layer 
[==================================================>] 54.92MB/54.92MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0 + cfb5cbeabdb2: Loading layer [==================================================>] 55.38MB/55.38MB + Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0 + ... + ... * * Tagging and pushing images - dellemc/csi-isilon:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.4.0.000R - dellemc/csi-isilon:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.5.0 - dellemc/csi-isilon:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.6.0 - dellemc/csipowermax-reverseproxy:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csipowermax-reverseproxy:v1.3.0 - dellemc/csi-powermax:v1.5.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.5.0.000R - dellemc/csi-powermax:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.6.0 - dellemc/csi-powermax:v1.7.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.7.0 - dellemc/csi-powerstore:v1.2.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.2.0.000R - dellemc/csi-powerstore:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.3.0 - dellemc/csi-powerstore:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.4.0 - dellemc/csi-unity:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.4.0.000R - dellemc/csi-unity:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.5.0 - dellemc/csi-unity:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.6.0 - dellemc/csi-vxflexos:v1.3.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.3.0.000R - dellemc/csi-vxflexos:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.4.0 - dellemc/csi-vxflexos:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.5.0 - dellemc/dell-csi-operator:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/dell-csi-operator:v1.4.0 - dellemc/sdc:3.5.1.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1 - dellemc/sdc:3.5.1.1-1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1-1 - docker.io/busybox:1.32.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/busybox:1.32.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.0.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.1.0 - k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.2.1 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.0.1 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.1.0 - k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.2.0 - k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.0.2 - k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.1.0 - k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.2.1 - 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.2.0 - k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.2 - k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.3 - k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.0.0 - k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.1.0 - quay.io/k8scsi/csi-resizer:v1.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.0.0 - quay.io/k8scsi/csi-resizer:v1.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.1.0 + localregistry:5035/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0 + dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0 + dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0 + dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0 + dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0 + dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0 + dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0 + dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0 + dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly + dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0 + dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0 + dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0 + dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0 + dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1 + dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1 + dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6 + docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0 + ... + ... 
* -* Preparing operator files within /tmp/dell-csi-operator-bundle - - changing: dellemc/csi-isilon:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.4.0.000R - changing: dellemc/csi-isilon:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.5.0 - changing: dellemc/csi-isilon:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-isilon:v1.6.0 - changing: dellemc/csipowermax-reverseproxy:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csipowermax-reverseproxy:v1.3.0 - changing: dellemc/csi-powermax:v1.5.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.5.0.000R - changing: dellemc/csi-powermax:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.6.0 - changing: dellemc/csi-powermax:v1.7.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powermax:v1.7.0 - changing: dellemc/csi-powerstore:v1.2.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.2.0.000R - changing: dellemc/csi-powerstore:v1.3.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.3.0 - changing: dellemc/csi-powerstore:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-powerstore:v1.4.0 - changing: dellemc/csi-unity:v1.4.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.4.0.000R - changing: dellemc/csi-unity:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.5.0 - changing: dellemc/csi-unity:v1.6.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-unity:v1.6.0 - changing: dellemc/csi-vxflexos:v1.3.0.000R -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.3.0.000R - changing: dellemc/csi-vxflexos:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.4.0 - changing: dellemc/csi-vxflexos:v1.5.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-vxflexos:v1.5.0 - changing: dellemc/dell-csi-operator:v1.4.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/dell-csi-operator:v1.4.0 - changing: dellemc/sdc:3.5.1.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1 - changing: dellemc/sdc:3.5.1.1-1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/sdc:3.5.1.1-1 - changing: docker.io/busybox:1.32.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/busybox:1.32.0 - changing: k8s.gcr.io/sig-storage/csi-attacher:v3.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.0.0 - changing: k8s.gcr.io/sig-storage/csi-attacher:v3.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.1.0 - changing: k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-attacher:v3.2.1 - changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.0.1 - changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.1.0 - changing: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-node-driver-registrar:v2.2.0 - changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.0.2 - changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.1.0 - changing: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-provisioner:v2.2.1 - 
changing: k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.2.0 - changing: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.2 - changing: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v3.0.3 - changing: k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.0.0 - changing: k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-snapshotter:v4.1.0 - changing: quay.io/k8scsi/csi-resizer:v1.0.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.0.0 - changing: quay.io/k8scsi/csi-resizer:v1.1.0 -> amaas-eos-mw1.cec.lab.emc.com:5028/csi-operator/csi-resizer:v1.1.0 - +* Preparing operator files within /root/dell-csi-operator-bundle + + changing: localregistry:5000/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0 + changing: dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0 + changing: dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0 + changing: dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0 + changing: dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0 + changing: dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0 + changing: dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0 + changing: dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0 + changing: dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly + changing: dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0 + changing: dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0 + changing: dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0 + changing: dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0 + changing: dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1 + changing: dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1 + changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6 + changing: docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0 + ... + ... + * * Complete - ``` ### Perform either a Helm installation or Operator installation diff --git a/content/v3/csidriver/installation/operator/_index.md b/content/v3/csidriver/installation/operator/_index.md index 468761f0f6..71140cd643 100644 --- a/content/v3/csidriver/installation/operator/_index.md +++ b/content/v3/csidriver/installation/operator/_index.md @@ -1,28 +1,28 @@ --- -title: "Dell CSI Operator Installation Process" +title: "CSI Driver installation using Dell CSI Operator" linkTitle: "Using Operator" weight: 4 description: > Installation of CSI drivers using Dell CSI Operator --- -The Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell EMC for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. 
It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
+The Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
 
 ## Prerequisites
 #### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
 
 #### Volume Snapshot Controller
 The CSI external-snapshotter sidecar is split into two controllers:
 - A common snapshot controller
 - A CSI external-snapshotter sidecar
 
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v4.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.2.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
 
 *NOTE:*
 - The manifests available on GitHub install the snapshotter image:
-  - [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
+  - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags)
 - The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
 
 #### Installation example
@@ -37,7 +37,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
 ```
 *NOTE:*
-- It is recommended to use 4.2.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 5.0.x version of the snapshotter/snapshot-controller.
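+After the CRDs and the common snapshot controller are applied, a quick sanity check can confirm that everything is in place. This is an illustrative check only; it assumes the controller was deployed with the upstream manifests, which place it in the `kube-system` namespace and label its pods `app=snapshot-controller`:
+```
+# List the three snapshot CRDs installed from client/config/crd
+kubectl get crd | grep snapshot.storage.k8s.io
+# Check the common snapshot controller (adjust the namespace if it was installed elsewhere)
+kubectl get pods -n kube-system -l app=snapshot-controller
+```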
## Installation @@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa #### Full list of CSI Drivers and versions supported by the Dell CSI Operator | CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version | | ------------------ | --------- | -------------- | -------------------- | --------------------- | -| CSI PowerMax | 1.7 | v6 | 1.19, 1.20, 1.21 | 4.6, 4.7 | | CSI PowerMax | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerFlex | 1.5 | v5 | 1.19, 1.20, 1.21 | 4.6, 4.7 | +| CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerFlex | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerScale | 1.6 | v6 | 1.19, 1.20, 1.21 | 4.6, 4.7 | +| CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerScale | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | -| CSI Unity | 1.6 | v5 | 1.19, 1.20, 1.21 | 4.6, 4.7 | +| CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI Unity | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI Unity | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | -| CSI PowerStore | 1.4 | v4 | 1.19, 1.20, 1.21 | 4.6, 4.7 | +| CSI Unity | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 | | CSI PowerStore | 2.0.0 | v2.0.0 | 1.20, 1.21, 1.22 | 4.6 EUS, 4.7, 4.8 | | CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 | +| CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
@@ -76,7 +76,7 @@ The installation process involves the creation of a `Subscription` object either
 * _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available)
 * _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades
 
-**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
+**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**.
 
 #### Pre-Requisite for installation with OLM
 Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
@@ -98,8 +98,9 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
 
 >**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
 
 1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
-2. git checkout dell-csi-operator-
-3. Run `bash scripts/install.sh` to install the operator.
+2. cd dell-csi-operator
+3. git checkout dell-csi-operator-`<your-version>`
+4. Run `bash scripts/install.sh` to install the operator.
 
 >NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. Any existing installations of Dell CSI Operator (v1.2.0 or later) installed using `install.sh` to the 'default' or 'dell-csi-operator' namespace can be upgraded to the new version by running `install.sh --upgrade`.
 
@@ -126,8 +127,7 @@ For installation of the supported drivers, a `CustomResource` has to be created
 ### Pre-requisites for upstream Kubernetes Clusters
 On upstream Kubernetes clusters, make sure to install
 * VolumeSnapshot CRDs
-  * On clusters running v1.20,v1.21 & v1.22, make sure to install v1 VolumeSnapshot CRDs
-  * On clusters running v1.19, make sure to install v1beta1 VolumeSnapshot CRDs
+  * On clusters running v1.21, v1.22 & v1.23, make sure to install v1 VolumeSnapshot CRDs
 * External Volume Snapshot Controller with the correct version
 
 ### Pre-requisites for Red Hat OpenShift Clusters
@@ -210,36 +210,6 @@ Finally, you have to restart the service by providing the command
 
 For additional information refer to official documentation of the multipath configuration.
 
-## Replacing CSI Operator with Dell CSI Operator
-`Dell CSI Operator` was previously available, with the name `CSI Operator`, for both manual and OLM installation.
-`CSI Operator` has been discontinued and has been renamed to `Dell CSI Operator`. This is just a name change and as a result,
-the Kubernetes resources created as part of the Operator deployment will use the name `dell-csi-operator` instead of `csi-operator`.
-
-Before proceeding with the installation of the new `Dell CSI Operator`, any existing `CSI Operator` installation has to be completely
- -Note - This **doesn't** impact any of the CSI Drivers which have been installed in the cluster - -If the old `CSI Operator` was installed manually, then run the following command from the root of the repository which was used -originally for installation - - bash scripts/undeploy.sh - -If you don't have the original repository available, then run the following commands - - git clone https://github.com/dell/dell-csi-operator.git - cd dell-csi-operator - git checkout csi-operator-v1.0.0 - bash scripts/undeploy.sh - -Note - Once you have removed the old `CSI Operator`, then for installing the new `Dell CSI Operator`, you will need to pull/checkout the latest code - -If you had installed the old CSI Operator using OLM, then please follow the uninstallation instructions provided by OperatorHub. This will mostly involve: - - * Deleting the CSI Operator Subscription - * Deleting the CSI Operator CSV - - ## Installing CSI Driver via Operator CSI Drivers can be installed by creating a `CustomResource` object in your cluster. @@ -251,8 +221,8 @@ Or {driver name}_{driver version}_ops_{OpenShift version}.yaml For e.g. -* sample/powermax_v140_k8s_117.yaml* <- To install CSI PowerMax driver v1.4.0 on a Kubernetes 1.17 cluster -* sample/powermax_v140_ops_46.yaml* <- To install CSI PowerMax driver v1.4.0 on an OpenShift 4.6 cluster +* samples/powermax_v220_k8s_123.yaml* <- To install CSI PowerMax driver v2.2.0 on a Kubernetes 1.23 cluster +* samples/powermax_v220_ops_49.yaml* <- To install CSI PowerMax driver v2.2.0 on an OpenShift 4.9 cluster Copy the correct sample file and edit the mandatory & any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification) >NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields. @@ -293,14 +263,19 @@ The CSI Drivers installed by the Dell CSI Operator can be updated like any Kuber # Replace driver-namespace with the namespace where the Unity driver is installed $ kubectl edit csiunity/unity -n ``` - and modify the installation -* Modify the API object in-place via `kubectl patch` + and modify the installation. The usual fields to edit are the version of drivers and sidecars and the env variables. +* Modify the API object in place via `kubectl patch` command. + +To create patch file or edit deployments, refer [here](https://github.com/dell/dell-csi-operator/tree/master/samples) for driver version & env variables and [here](https://github.com/dell/dell-csi-operator/tree/master/driverconfig/config.yaml) for version of side-cars. +The latest versions of drivers could have additional env variables or sidecars. + +The below notes explain some of the general items to take care of. **NOTES:** 1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required. ```yaml driver: - configVersion: v2.1.0 + configVersion: v2.2.0 ``` 2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator. 
To enable this feature, we will have to modify the below block while upgrading the driver. To get the volume health state, add
@@ -310,12 +285,12 @@ The CSI Drivers installed by the Dell CSI Operator can be updated like any Kuber
    ```yaml
    controller:
      envs:
-      - name: X_CSI_ENABLE_VOL_HEALTH_MONITOR
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
        value: "true"
    dnsPolicy: ClusterFirstWithHostNet
    node:
      envs:
-      - name: X_CSI_ENABLE_VOL_HEALTH_MONITOR
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
        value: "true"
    ```
   ii. Update the sidecar versions and add external-health-monitor sidecar if you want to enable health monitor of CSI volumes from Controller plugin:
@@ -324,12 +299,12 @@ The CSI Drivers installed by the Dell CSI Operator can be updated like any Kuber
    - args:
      - --volume-name-prefix=csiunity
      - --default-fstype=ext4
-     image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
+     image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
      imagePullPolicy: IfNotPresent
      name: provisioner
    - args:
      - --snapshot-name-prefix=csiunitysnap
-     image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1
+     image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
      imagePullPolicy: IfNotPresent
      name: snapshotter
    - args:
@@ -337,13 +312,13 @@ The CSI Drivers installed by the Dell CSI Operator can be updated like any Kuber
      image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.4.0
      imagePullPolicy: IfNotPresent
      name: external-health-monitor
-    - image: k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
+    - image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
      imagePullPolicy: IfNotPresent
      name: attacher
-    - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
+    - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
      imagePullPolicy: IfNotPresent
      name: registrar
-    - image: k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
+    - image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
      imagePullPolicy: IfNotPresent
      name: resizer
    ```
@@ -358,7 +333,7 @@ data:
     CSI_LOG_LEVEL: "info"
     ALLOW_RWO_MULTIPOD_ACCESS: "false"
     MAX_UNITY_VOLUMES_PER_NODE: "0"
-    SYNC_NODE_INFO_TIME_INTERVAL: "0"
+    SYNC_NODE_INFO_TIME_INTERVAL: "15"
     TENANT_NAME: ""
 ```
@@ -410,6 +385,9 @@ It should be set separately in the controller and node sections if you want sepa
 **nodeSelector**
 Used to specify node selectors for the driver StatefulSet/Deployment and DaemonSet
 
+**fsGroupPolicy**
+Defines which FS Group policy mode is to be used. Supported modes: None, File and ReadWriteOnceWithFSType
+
 Here is a sample specification annotated with comments to explain each field
 ```yaml
 apiVersion: storage.dell.com/v1
@@ -438,7 +416,7 @@ Note - The `image` field should point to the correct image tag for version of the
 For e.g. - If you wish to install v1.4 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v1.4.0.000R`
 
 ### SideCars
-Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell EMC support.
+Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests.
The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should only be done after consulting with Dell support.
 
 ### Modify the driver specification
 * Choose the correct configVersion. Refer the table containing the full list of supported drivers and versions.
diff --git a/content/v3/csidriver/installation/operator/isilon.md b/content/v3/csidriver/installation/operator/isilon.md
index 62c7f1309a..00e4c69924 100644
--- a/content/v3/csidriver/installation/operator/isilon.md
+++ b/content/v3/csidriver/installation/operator/isilon.md
@@ -6,7 +6,7 @@ description: >
 
 ## Installing CSI Driver for PowerScale via Operator
 
-The CSI Driver for Dell EMC PowerScale can be installed via the Dell CSI Operator.
+The CSI Driver for Dell PowerScale can be installed via the Dell CSI Operator.
 
 To deploy the Operator, follow the instructions available [here](../).
 
@@ -115,6 +115,7 @@ User can query for CSI-PowerScale driver using the following command:
   | Parameter | Description | Required | Default |
   | --------- | ----------- | -------- |-------- |
   | dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet |
+  | fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
   | ***Common parameters for node and controller*** |
   | CSI_ENDPOINT | The UNIX socket address for handling gRPC calls | No | /var/run/csi/csi.sock |
   | X_CSI_ISI_SKIP_CERTIFICATE_VALIDATION | Specifies whether SSL security needs to be enabled for communication between PowerScale and CSI Driver | No | true |
@@ -123,19 +124,61 @@ User can query for CSI-PowerScale driver using the following command:
   | X_CSI_ISI_AUTOPROBE | To enable auto probing for driver | No | true |
   | X_CSI_ISI_NO_PROBE_ON_START | Indicates whether the controller/node should probe during initialization | Yes | |
   | X_CSI_ISI_VOLUME_PATH_PERMISSIONS | The permissions for isi volume directory path | Yes | 0777 |
+  | X_CSI_ISI_AUTH_TYPE | Indicates the authentication method to be used. If set to 1, session-based authentication is used; otherwise, basic authentication is used | No | 0 |
   | ***Controller parameters*** |
   | X_CSI_MODE | Driver starting mode | No | controller |
   | X_CSI_ISI_ACCESS_ZONE | Name of the access zone a volume can be created in | No | System |
-  | X_CSI_ISI_QUOTA_ENABLED | To enable SmartQuotas | Yes | |
+  | X_CSI_ISI_QUOTA_ENABLED | To enable SmartQuotas | Yes | |
+  | nodeSelector | Define node selection constraints for pods of controller deployment | No | |
+  | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition. As a prerequisite, the external-health-monitor sidecar section should be uncommented in the samples, which installs the sidecar | No | false |
   | ***Node parameters*** |
   | X_CSI_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | Yes | 0 |
-  | X_CSI_MODE | Driver starting mode | No | node |
+  | X_CSI_MODE | Driver starting mode | No | node |
+  | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from node plugin.
Provides details of volume usage | No | false |
+  | ***Side car parameters*** |
+  | leader-election-lease-duration | Duration that non-leader candidates will wait to force acquire leadership | No | 20s |
+  | leader-election-renew-deadline | Duration that the acting leader will retry refreshing leadership before giving up | No | 15s |
+  | leader-election-retry-period | Duration the LeaderElector clients should wait between tries of actions | No | 5s |
+
 6. Execute the following command to create PowerScale custom resource: ```kubectl create -f <input_sample_file.yaml>```. This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file.
 
 **Note** :
   1. From CSI-PowerScale v1.6.0 and higher, Storage class and VolumeSnapshotClass will **not** be created as part of driver deployment. The user has to create Storageclass and Volume Snapshot Class.
-  2. Node selector and node tolerations can be added in both controller parameters and node parameters section, based on the need.
-  3. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
-  4. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation.
+  2. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
+  3. Also, the snapshotter and resizer sidecars are not optional; they come by default with the driver installation.
+
+## Volume Health Monitoring
+This feature is introduced in CSI Driver for PowerScale version 2.1.0.
+
+### Operator based installation
+
+Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
+To enable this feature, add the below block to the driver manifest before installing the driver. This ensures that the external-health-monitor sidecar is installed. To get the volume health state, `value` under controller should be set to true, as seen below. To get the volume stats, `value` under node should be set to true.
+
+  ```yaml
+  # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
+  # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
+  # - name: external-health-monitor
+  #   args: ["--monitor-interval=60s"]
+
+  # Install the 'external-health-monitor' sidecar accordingly.
+ # Allowed values: + # true: enable checking of health condition of CSI volumes + # false: disable checking of health condition of CSI volumes + # Default value: false + controller: + envs: + - name: X_CSI_HEALTH_MONITOR_ENABLED + value: "true" + node: + envs: + # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage + # Allowed values: + # true: enable checking of health condition of CSI volumes + # false: disable checking of health condition of CSI volumes + # Default value: false + - name: X_CSI_HEALTH_MONITOR_ENABLED + value: "true" + ``` diff --git a/content/v3/csidriver/installation/operator/non-olm-1.jpg b/content/v3/csidriver/installation/operator/non-olm-1.jpg index 2a5fb5c249..3cc966646a 100644 Binary files a/content/v3/csidriver/installation/operator/non-olm-1.jpg and b/content/v3/csidriver/installation/operator/non-olm-1.jpg differ diff --git a/content/v3/csidriver/installation/operator/powerflex.md b/content/v3/csidriver/installation/operator/powerflex.md index b43af0aa12..ea959f4639 100644 --- a/content/v3/csidriver/installation/operator/powerflex.md +++ b/content/v3/csidriver/installation/operator/powerflex.md @@ -5,7 +5,7 @@ description: > --- ## Installing CSI Driver for PowerFlex via Operator -The CSI Driver for Dell EMC PowerFlex can be installed via the Dell CSI Operator. +The CSI Driver for Dell PowerFlex can be installed via the Dell CSI Operator. To deploy the Operator, follow the instructions available [here](../). @@ -66,13 +66,13 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c ### Manual SDC Deployment -For detailed PowerFlex installation procedure, see the _Dell EMC PowerFlex Deployment Guide_. Install the PowerFlex SDC as follows: +For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deployment Guide_. Install the PowerFlex SDC as follows: **Steps** -1. Download the PowerFlex SDC from [Dell EMC Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version. +1. Download the PowerFlex SDC from [Dell Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version. 2. Export the shell variable _MDM_IP_ in a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP address in your environment. This list contains the IP addresses of the MDMs. -3. Install the SDC per the _Dell EMC PowerFlex Deployment Guide_: +3. Install the SDC per the _Dell PowerFlex Deployment Guide_: - For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version. 4. To add more MDM_IP for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx.xx,10.xx.xx.xx` diff --git a/content/v3/csidriver/installation/operator/powermax.md b/content/v3/csidriver/installation/operator/powermax.md index 0ebcd1ac23..781eb18fe7 100644 --- a/content/v3/csidriver/installation/operator/powermax.md +++ b/content/v3/csidriver/installation/operator/powermax.md @@ -6,7 +6,7 @@ description: > ## Installing CSI Driver for PowerMax via Operator -CSI Driver for Dell EMC PowerMax can be installed via the Dell CSI Operator. +CSI Driver for Dell PowerMax can be installed via the Dell CSI Operator. 
To deploy the Operator, follow the instructions available [here](../).
 
@@ -28,17 +28,17 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
 Create a file called powermax-creds.yaml with the following content:
   ```yaml
   apiVersion: v1
-  kind: Secret
-  metadata:
+  kind: Secret
+  metadata:
     name: powermax-creds
-  # Replace driver-namespace with the namespace where driver is being deployed
-  namespace: 
-  type: Opaque
-  data:
-  # set username to the base64 encoded username
-  username: 
-  # set password to the base64 encoded password
-  password: 
+  # Replace driver-namespace with the namespace where driver is being deployed
+  namespace: <driver-namespace>
+  type: Opaque
+  data:
+  # set username to the base64 encoded username
+  username: <base64 username>
+  # set password to the base64 encoded password
+  password: <base64 password>
   # Uncomment the following key if you wish to use ISCSI CHAP authentication (v1.3.0 onwards)
   # chapsecret: 
   ```
@@ -65,10 +65,11 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
 | X_CSI_MANAGED_ARRAYS | List of comma-separated array ID(s) which will be managed by the driver | Yes | - |
 | X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. Leave blank if not using reverse proxy | No | - |
 | X_CSI_GRPC_MAX_THREADS | Number of concurrent grpc requests allowed per client | No | 4 |
-| X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - |
-| ***Node parameters***|
+ | X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - |
+ | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller and Node plugin. Provides details of volume status, usage and volume condition. As a prerequisite, the external-health-monitor sidecar section should be uncommented in the samples, which installs the sidecar | No | false |
+ | ***Node parameters***|
 | X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
-5. Execute the following command to create the PowerMax custom resource:`kubectl create -f `. The above command will deploy the CSI-PowerMax driver.
+5. Execute the following command to create the PowerMax custom resource: `kubectl create -f <input_sample_file.yaml>`. The above command will deploy the CSI-PowerMax driver.
 
 ### CSI PowerMax ReverseProxy
@@ -198,8 +199,8 @@ metadata:
   namespace: test-powermax
 spec:
   driver:
-    # Config version for CSI PowerMax v2.1.0 driver
-    configVersion: v2.1.0
+    # Config version for CSI PowerMax v2.2.0 driver
+    configVersion: v2.2.0
     # replica: Define the number of PowerMax controller nodes
     # to deploy to the Kubernetes release
     # Allowed values: n, where n > 0
     replicas: 2
     dnsPolicy: ClusterFirstWithHostNet
     forceUpdate: false
     common:
-      # Image for CSI PowerMax driver v2.1.0
-      image: dellemc/csi-powermax:v2.1.0
+      # Image for CSI PowerMax driver v2.2.0
+      image: dellemc/csi-powermax:v2.2.0
       # imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
       # Allowed values:
       #  Always: Always pull the image.
@@ -223,54 +224,70 @@ spec: # Examples: "000000000001", "000000000002" - name: X_CSI_MANAGED_ARRAYS value: "000000000000,000000000001" - # X_CSI_POWERMAX_ENDPOINT: Address of the Unisphere server that is managing the PowerMax arrays - # Default value: None - # Example: https://0.0.0.1:8443 + # X_CSI_POWERMAX_ENDPOINT: Address of the Unisphere server that is managing the PowerMax arrays + # Default value: None + # Example: https://0.0.0.1:8443 - name: X_CSI_POWERMAX_ENDPOINT value: "https://0.0.0.0:8443/" - # X_CSI_K8S_CLUSTER_PREFIX: Define a prefix that is appended onto - # all resources created in the Array - # This should be unique per K8s/CSI deployment - # maximum length of this value is 3 characters - # Default value: None - # Examples: "XYZ", "EMC" - # Examples: "XYZ", "EMC" + # X_CSI_K8S_CLUSTER_PREFIX: Define a prefix that is appended onto + # all resources created in the Array + # This should be unique per K8s/CSI deployment + # maximum length of this value is 3 characters + # Default value: None + # Examples: "XYZ", "EMC" - name: X_CSI_K8S_CLUSTER_PREFIX value: "XYZ" - # X_CSI_POWERMAX_PORTGROUPS: Define the set of existing port groups that the driver will use. - # It is a comma separated list of portgroup names. - # Required only in case of iSCSI port groups - # Allowed values: iSCSI Port Group names - # Default value: None - # Examples: "pg1", "pg1, pg2" + # X_CSI_POWERMAX_PORTGROUPS: Define the set of existing port groups that the driver will use. + # It is a comma separated list of portgroup names. + # Required only in case of iSCSI port groups + # Allowed values: iSCSI Port Group names + # Default value: None + # Examples: "pg1", "pg1, pg2" - name: "X_CSI_POWERMAX_PORTGROUPS" value: "" - # "X_CSI_TRANSPORT_PROTOCOL" can be "FC" or "FIBRE" for fibrechannel, - # "ISCSI" for iSCSI, or "" for autoselection. - # Allowed values: - # "FC" - Fiber Channel protocol - # "FIBER" - Fiber Channel protocol - # "ISCSI" - iSCSI protocol - # "" - Automatic selection of transport protocol - # Default value: "" + # "X_CSI_TRANSPORT_PROTOCOL" can be "FC" or "FIBRE" for fibrechannel, + # "ISCSI" for iSCSI, or "" for autoselection. + # Allowed values: + # "FC" - Fiber Channel protocol + # "FIBER" - Fiber Channel protocol + # "ISCSI" - iSCSI protocol + # "" - Automatic selection of transport protocol + # Default value: "" - name: "X_CSI_TRANSPORT_PROTOCOL" value: "" - # X_CSI_POWERMAX_PROXY_SERVICE_NAME: Refers to the name of the proxy service in kubernetes - # Set this to "powermax-reverseproxy" if you are installing the proxy - # Allowed values: "powermax-reverseproxy" - # default values: "" + # X_CSI_POWERMAX_PROXY_SERVICE_NAME: Refers to the name of the proxy service in kubernetes + # Set this to "powermax-reverseproxy" if you are installing the proxy + # Allowed values: "powermax-reverseproxy" + # default values: "" - name: "X_CSI_POWERMAX_PROXY_SERVICE_NAME" value: "" - # X_CSI_GRPC_MAX_THREADS: Defines the maximum number of concurrent grpc requests. - # Set this value to a higher number (max 50) if you are using the proxy - # Allowed values: n, where n > 4 - # default values: None + # X_CSI_GRPC_MAX_THREADS: Defines the maximum number of concurrent grpc requests. 
+      # Set this value to a higher number (max 50) if you are using the proxy
+      # Allowed values: n, where n > 4
+      # default values: None
       - name: "X_CSI_GRPC_MAX_THREADS"
         value: "4"
-  node:
+  sideCars:
+    # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
+    # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true" for controller plugin.
+    # Also set the env variable node.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true" for node plugin.
+    #- name: external-health-monitor
+    #  args: ["--monitor-interval=300s"]
+
+  controller:
     envs:
-      # X_CSI_POWERMAX_ISCSI_ENABLE_CHAP: Determine if the driver is going to configure
+      # X_CSI_HEALTH_MONITOR_ENABLED: Determines if the controller plugin will monitor health of CSI volumes - volume status, volume condition
+      # Install the 'external-health-monitor' sidecar accordingly.
+      # Allowed values:
+      #   true: enable checking of health condition of CSI volumes
+      #   false: disable checking of health condition of CSI volumes
+      # Default value: false
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "false"
+  node:
+    envs:
+      # X_CSI_POWERMAX_ISCSI_ENABLE_CHAP: Determine if the node plugin is going to configure
       # ISCSI node databases on the nodes with the CHAP credentials
       # If enabled, the CHAP secret must be provided in the credentials secret
       # and set to the key "chapsecret"
@@ -280,6 +297,13 @@ spec:
       # Default value: "false"
       - name: "X_CSI_POWERMAX_ISCSI_ENABLE_CHAP"
         value: "false"
+      # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage, volume condition
+      # Allowed values:
+      #   true: enable checking of health condition of CSI volumes
+      #   false: disable checking of health condition of CSI volumes
+      # Default value: false
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "false"
 ---
 apiVersion: v1
 kind: ConfigMap
@@ -299,3 +323,32 @@ Note: only present with `dell-csi-helm-installer`.
 
 - `Kubelet config dir path` is not yet configurable in case of Operator based driver installation.
 - Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation.
+
+## Volume Health Monitoring
+This feature is introduced in CSI Driver for PowerMax version 2.2.0.
+
+### Operator based installation
+Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
+
+To enable this feature, set `X_CSI_HEALTH_MONITOR_ENABLED` to `true` in the driver manifest under the controller and node sections. Also, install the `external-health-monitor` from the `sideCars` section for the controller plugin.
+To get the volume health state, `value` under controller should be set to true, as seen below. To get the volume stats, `value` under node should be set to true.
+
+```yaml
+  # Install the 'external-health-monitor' sidecar accordingly.
+  # Allowed values:
+  #   true: enable checking of health condition of CSI volumes
+  #   false: disable checking of health condition of CSI volumes
+  # Default value: false
+  controller:
+    envs:
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "true"
+  node:
+    envs:
+      # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
+      # Allowed values:
+      #   true: enable checking of health condition of CSI volumes
+      #   false: disable checking of health condition of CSI volumes
+      # Default value: false
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "true"
+```
\ No newline at end of file
diff --git a/content/v3/csidriver/installation/operator/powerstore.md b/content/v3/csidriver/installation/operator/powerstore.md
index 8fb2f30f95..ae60025943 100644
--- a/content/v3/csidriver/installation/operator/powerstore.md
+++ b/content/v3/csidriver/installation/operator/powerstore.md
@@ -5,7 +5,7 @@ description: >
 ---
 ## Installing CSI Driver for PowerStore via Operator
 
-The CSI Driver for Dell EMC PowerStore can be installed via the Dell CSI Operator.
+The CSI Driver for Dell PowerStore can be installed via the Dell CSI Operator.
 
 To deploy the Operator, follow the instructions available [here](../).
 
@@ -30,8 +30,10 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
       password: "password"             # password for connecting to API
       skipCertificateValidation: true  # indicates if client side validation of (management)server's certificate can be skipped
       isDefault: true                  # treat current array as a default (would be used by storage classes without arrayID parameter)
-      blockProtocol: "auto"            # what SCSI transport protocol use on node side (FC, ISCSI, None, or auto)
+      blockProtocol: "auto"            # which SCSI transport protocol to use on node side (FC, ISCSI, NVMeTCP, None, or auto)
       nasName: "nas-server"            # what NAS should be used for NFS volumes
+      nfsAcls: "0777"                  # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
+                                       # NFSv4 ACLs are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
 ```
 
 Change the parameters with relevant values for your PowerStore array.
@@ -56,22 +58,123 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
    ```
 
 4. Create a Custom Resource (CR) for PowerStore using the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples).
+ +Below is a sample CR: + +```yaml +apiVersion: storage.dell.com/v1 +kind: CSIPowerStore +metadata: + name: test-powerstore + namespace: test-powerstore +spec: + driver: + configVersion: v2.2.0 + replicas: 2 + dnsPolicy: ClusterFirstWithHostNet + forceUpdate: false + fsGroupPolicy: ReadWriteOnceWithFSType + common: + image: "dellemc/csi-powerstore:v2.2.0" + imagePullPolicy: IfNotPresent + envs: + - name: X_CSI_POWERSTORE_NODE_NAME_PREFIX + value: "csi" + - name: X_CSI_FC_PORTS_FILTER_FILE_PATH + value: "/etc/fc-ports-filter" + sideCars: + - name: external-health-monitor + args: ["--monitor-interval=60s"] + + controller: + envs: + - name: X_CSI_HEALTH_MONITOR_ENABLED + value: "false" + - name: X_CSI_NFS_ACLS + value: "0777" + nodeSelector: + node-role.kubernetes.io/master: "" + tolerations: + - key: "node-role.kubernetes.io/master" + operator: "Exists" + effect: "NoSchedule" + + node: + envs: + - name: "X_CSI_POWERSTORE_ENABLE_CHAP" + value: "true" + - name: X_CSI_HEALTH_MONITOR_ENABLED + value: "false" + nodeSelector: + node-role.kubernetes.io/worker: "" + + tolerations: + - key: "node-role.kubernetes.io/worker" + operator: "Exists" + effect: "NoSchedule" +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: powerstore-config-params + namespace: test-powerstore +data: + driver-config-params.yaml: | + CSI_LOG_LEVEL: "debug" + CSI_LOG_FORMAT: "JSON" +``` + 5. Users must configure the parameters in CR. The following table lists the primary configurable parameters of the PowerStore driver and their default values: - | Parameter | Description | Required | Default | - | --------- | ----------- | -------- |-------- | - | replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, the excess pods will be pending state till new nodes are available for scheduling. Default is 2 which allows for Controller high availability. | Yes | 2 | - | namespace | Specifies namespace where the drive will be installed | Yes | "test-powerstore" | - | ***Common parameters for node and controller*** | - | X_CSI_POWERSTORE_NODE_NAME_PREFIX | Prefix to add to each node registered by the CSI driver | Yes | "csi-node" - | X_CSI_FC_PORTS_FILTER_FILE_PATH | To set path to the file which provides a list of WWPN which should be used by the driver for FC connection on this node | No | "/etc/fc-ports-filter" | - | ***Controller parameters*** | - | X_CSI_POWERSTORE_EXTERNAL_ACCESS | allows specifying additional entries for hostAccess of NFS volumes. Both single IP address and subnet are valid entries | No | " "| - | ***Node parameters*** | - | X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false | +| Parameter | Description | Required | Default | +| --------- | ----------- | -------- |-------- | +| replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, the excess pods will be pending state till new nodes are available for scheduling. Default is 2 which allows for Controller high availability. 
| Yes | 2 |
+| namespace | Specifies the namespace where the driver will be installed | Yes | "test-powerstore" |
+| fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+| ***Common parameters for node and controller*** |
+| X_CSI_POWERSTORE_NODE_NAME_PREFIX | Prefix to add to each node registered by the CSI driver | Yes | "csi-node"
+| X_CSI_FC_PORTS_FILTER_FILE_PATH | To set path to the file which provides a list of WWPN which should be used by the driver for FC connection on this node | No | "/etc/fc-ports-filter" |
+| ***Controller parameters*** |
+| X_CSI_POWERSTORE_EXTERNAL_ACCESS | Allows specifying additional entries for hostAccess of NFS volumes. Both single IP address and subnet are valid entries | No | " "|
+| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
+| ***Node parameters*** |
+| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
 
 6. Execute the following command to create PowerStore custom resource: `kubectl create -f <input_sample_file.yaml>`. The above command will deploy the CSI-PowerStore driver.
 - After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n <namespace>`
 
+## Volume Health Monitoring
+
+Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
+To enable this feature, add the below block to the driver manifest before installing the driver. This ensures that the external-health-monitor sidecar is installed. To get the volume health state, the value under controller should be set to true, as seen below. To get the volume stats, the value under node should be set to true.
+  ```yaml
+  sideCars:
+    # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
+    # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true".
+    - name: external-health-monitor
+      args: ["--monitor-interval=60s"]
+  controller:
+    envs:
+      # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin - volume status, volume condition.
+      # Install the 'external-health-monitor' sidecar accordingly.
+      # Allowed values:
+      #   true: enable checking of health condition of CSI volumes
+      #   false: disable checking of health condition of CSI volumes
+      # Default value: false
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "false"
+  node:
+    envs:
+      # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage, volume condition
+      # Allowed values:
+      #   true: enable checking of health condition of CSI volumes
+      #   false: disable checking of health condition of CSI volumes
+      # Default value: false
+      - name: X_CSI_HEALTH_MONITOR_ENABLED
+        value: "false"
+  ```
+
 ## Dynamic Logging Configuration
 
 This feature is introduced in CSI Driver for PowerStore version 2.0.0.
 
@@ -85,4 +188,4 @@ kubectl edit configmap -n csi-powerstore powerstore-config-params
 ```
 **Note** :
   1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
-  2. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation.
+  2. Also, the snapshotter and resizer sidecars are not optional; they come by default with the driver installation.
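+As an alternative to editing the ConfigMap interactively, the same logging change can be applied with a single `kubectl patch` command. This is only a sketch, assuming the driver runs in the `csi-powerstore` namespace and uses the `powerstore-config-params` ConfigMap shown above:
+```
+# Assumes namespace csi-powerstore; adjust to match your installation.
+kubectl patch configmap powerstore-config-params -n csi-powerstore \
+  --type merge \
+  -p '{"data":{"driver-config-params.yaml":"CSI_LOG_LEVEL: \"debug\"\nCSI_LOG_FORMAT: \"JSON\"\n"}}'
+```
+Note that a JSON merge patch replaces the whole `driver-config-params.yaml` key, so include every parameter you want to keep.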
diff --git a/content/v3/csidriver/installation/operator/unity.md b/content/v3/csidriver/installation/operator/unity.md
index f11b998c2f..93c0bb0f2f 100644
--- a/content/v3/csidriver/installation/operator/unity.md
+++ b/content/v3/csidriver/installation/operator/unity.md
@@ -19,7 +19,7 @@ The following table lists driver configuration parameters for multiple storage a
 | password | Password for accessing Unity system | true | - |
 | restGateway | REST API gateway HTTPS endpoint Unity system| true | - |
 | arrayId | ArrayID for Unity system | true | - |
-| isDefaultArray | An array having isDefaultArray=true is for backward compatibility. This parameter should occur once in the list. | false | false |
+| isDefaultArray | An array having isDefaultArray=true is for backward compatibility. This parameter should occur once in the list. | true | - |
 
 Ex: secret.yaml
 
@@ -41,7 +41,7 @@ Ex: secret.yaml
 ```
 
-`kubectl create secret generic unity-creds -n unity --from-file=config=secret.secret`
+`kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml`
 
 Use the following command to replace or update the secret
 
@@ -81,9 +81,11 @@ Users should configure the parameters in CR. The following table lists the prima
   | ***Controller parameters*** | | | |
   | X_CSI_MODE | Driver starting mode | No | controller |
   | X_CSI_UNITY_AUTOPROBE | To enable auto probing for driver | No | true |
+  | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller plugin | No | |
   | ***Node parameters*** | | | |
   | X_CSI_MODE | Driver starting mode | No | node |
   | X_CSI_ISCSI_CHROOT | Path to which the driver will chroot before running any iscsi commands. | No | /noderoot |
+  | X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Node plugin | No | |
 
 ### Example CR for Unity
 Refer samples from [here](https://github.com/dell/dell-csi-operator/tree/master/samples). Below is an example CR:
@@ -95,18 +97,80 @@ metadata:
   namespace: test-unity
 spec:
   driver:
-    configVersion: v2.0.0
+    configVersion: v2.2.0
     replicas: 2
     dnsPolicy: ClusterFirstWithHostNet
     forceUpdate: false
     common:
-      image: "dellemc/csi-unity:v2.0.0"
+      image: "dellemc/csi-unity:v2.2.0"
       imagePullPolicy: IfNotPresent
     sideCars:
       - name: provisioner
        args: ["--volume-name-prefix=csiunity","--default-fstype=ext4"]
       - name: snapshotter
        args: ["--snapshot-name-prefix=csiunitysnap"]
+      # Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition.
+      # - name: external-health-monitor
+      #   args: ["--monitor-interval=60s"]
+
+    controller:
+      envs:
+        # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from Controller plugin. Provides details of volume status and volume condition.
+        # As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar
+        # Allowed values:
+        #   true: enable checking of health condition of CSI volumes
+        #   false: disable checking of health condition of CSI volumes
+        # Default value: false
+        - name: X_CSI_HEALTH_MONITOR_ENABLED
+          value: "false"
+
+      # nodeSelector: Define node selection constraints for controller pods.
+      # For the pod to be eligible to run on a node, the node must have each
+      # of the indicated key-value pairs as labels.
+ # Leave as blank to consider all nodes + # Allowed values: map of key-value pairs + # Default value: None + # Examples: + # node-role.kubernetes.io/master: "" + nodeSelector: + # node-role.kubernetes.io/master: "" + + # tolerations: Define tolerations for the controllers, if required. + # Leave as blank to install controller on worker nodes + # Default value: None + tolerations: + # - key: "node-role.kubernetes.io/master" + # operator: "Exists" + # effect: "NoSchedule" + + node: + envs: + # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage + # Allowed values: + # true: enable checking of health condition of CSI volumes + # false: disable checking of health condition of CSI volumes + # Default value: false + - name: X_CSI_HEALTH_MONITOR_ENABLED + value: "false" + # nodeSelector: Define node selection constraints for node pods. + # For the pod to be eligible to run on a node, the node must have each + # of the indicated key-value pairs as labels. + # Leave as blank to consider all nodes + # Allowed values: map of key-value pairs + # Default value: None + # Examples: + # node-role.kubernetes.io/master: "" + nodeSelector: + # node-role.kubernetes.io/master: "" + + # tolerations: Define tolerations for the controllers, if required. + # Leave as blank to install controller on worker nodes + # Default value: None + tolerations: + # - key: "node-role.kubernetes.io/master" + # operator: "Exists" + # effect: "NoSchedule" + --- apiVersion: v1 kind: ConfigMap @@ -118,7 +182,7 @@ data: CSI_LOG_LEVEL: "info" ALLOW_RWO_MULTIPOD_ACCESS: "false" MAX_UNITY_VOLUMES_PER_NODE: "0" - SYNC_NODE_INFO_TIME_INTERVAL: "0" + SYNC_NODE_INFO_TIME_INTERVAL: "15" TENANT_NAME: "" ``` @@ -165,11 +229,11 @@ To enable this feature, add the below block to the driver manifest before instal node: envs: - # X_CSI_ENABLE_VOL_HEALTH_MONITOR: Enable/Disable health monitor of CSI volumes from node plugin - volume usage + # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage # Allowed values: # true: enable checking of health condition of CSI volumes # false: disable checking of health condition of CSI volumes # Default value: false - - name: X_CSI_ENABLE_VOL_HEALTH_MONITOR + - name: X_CSI_HEALTH_MONITOR_ENABLED value: "false" ``` diff --git a/content/v3/csidriver/installation/test/powerflex.md b/content/v3/csidriver/installation/test/powerflex.md index d5c3d106b9..60890928ed 100644 --- a/content/v3/csidriver/installation/test/powerflex.md +++ b/content/v3/csidriver/installation/test/powerflex.md @@ -6,7 +6,7 @@ description: Tests to validate PowerFlex CSI Driver installation This section provides multiple methods to test driver functionality in your environment. -**Note**: To run the test for CSI Driver for Dell EMC PowerFlex, install Helm 3. +**Note**: To run the test for CSI Driver for Dell PowerFlex, install Helm 3. ## Test deploying a simple pod with PowerFlex storage @@ -91,7 +91,7 @@ The `snaptest.sh` script will create a snapshot using the definitions in the `sn *NOTE:* The `snaptest.sh` shell script creates the snapshots, describes them, and then deletes them. You can see your snapshots using `kubectl get volumesnapshot -n helmtest-vxflexos`. -Notice that this _VolumeSnapshot_ class has a reference to a _snapshotClassName: vxflexos-snapclass_. The CSI Driver for Dell EMC PowerFlex installation does not create this class. 
You will need
+Notice that this _VolumeSnapshot_ class has a reference to a _snapshotClassName: vxflexos-snapclass_. The CSI Driver for Dell PowerFlex installation does not create this class. You will need to create an instance of _VolumeSnapshotClass_ from one of the default samples in the `samples/volumesnapshotclass` directory.

## Test restoring from a snapshot
diff --git a/content/v3/csidriver/installation/test/powermax.md b/content/v3/csidriver/installation/test/powermax.md
index 9c0bd6109e..01b87aca59 100644
--- a/content/v3/csidriver/installation/test/powermax.md
+++ b/content/v3/csidriver/installation/test/powermax.md
@@ -6,9 +6,9 @@ description: Tests to validate PowerMax CSI Driver installation

This section provides multiple methods to test driver functionality in your environment. The tests are validated using bash as the default shell.

-**Note**: To run the test for CSI Driver for Dell EMC PowerMax, install Helm 3.
+**Note**: To run the test for CSI Driver for Dell PowerMax, install Helm 3.

-The _csi-powermax_ repository includes examples of how you can use CSI Driver for Dell EMC PowerMax. The shell scripts are used to automate the installation and uninstallation of helm charts for the creation of Pods with a different number of volumes in a given namespace using the storageclass provided. To test the installation of the CSI driver, perform these tests:
+The _csi-powermax_ repository includes examples of how you can use CSI Driver for Dell PowerMax. The shell scripts are used to automate the installation and uninstallation of helm charts for the creation of Pods with a different number of volumes in a given namespace using the storageclass provided. To test the installation of the CSI driver, perform these tests:
- Volume clone test
- Volume test
- Snapshot test
@@ -83,3 +83,91 @@ Application prefix is the name of the application that can be used to group the
ApplicationPrefix:
```
>Note: Supported length of storage group for PowerMax is 64 characters. Storage group name is of the format "csi-`clusterprefix`-`application prefix`-`SLO name`-`SRP name`-SG". Based on the other inputs like clusterprefix, SLO name and SRP name, the maximum length of the ApplicationPrefix can vary.
+
+## Consuming existing volumes with static provisioning
+
+Use this procedure to consume existing volumes with static provisioning.
+
+1. Open your Unisphere for PowerMax and take note of the volume ID.
+2. Create a PersistentVolume and use this volume ID as the volumeHandle in the manifest. Modify other parameters according to your needs.
+3. In the following example, the storage class is assumed to be 'powermax', the cluster prefix 'ABC', the volume's internal name '00001', the array ID '000000000001', and the volume ID '1abc23456'. The volume handle should be in the format of `csi`-`clusterPrefix`-`volumeNamePrefix`-`id`-`arrayID`-`volumeID`.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pvol
+  namespace: test
+spec:
+  accessModes:
+  - ReadWriteOnce
+  capacity:
+    storage: 8Gi
+  csi:
+    driver: csi-powermax.dellemc.com
+    volumeHandle: csi-ABC-pmax-1abc23456-000000000001-00001
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: powermax
+  volumeMode: Filesystem
+```
+
+4. Create a PersistentVolumeClaim to use this PersistentVolume.
+
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pvc
+  namespace: test
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+  storageClassName: powermax
+  volumeMode: Filesystem
+  volumeName: pvol
+```
+
+5. 
Then use this PVC as a volume in a pod.
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: powermaxtest
+  namespace: test
+---
+kind: StatefulSet
+apiVersion: apps/v1
+metadata:
+  name: powermaxtest
+  namespace: test
+spec:
+  selector:
+    matchLabels:
+      app: powermaxtest
+  serviceName: staticprovisioning
+  template:
+    metadata:
+      labels:
+        app: powermaxtest
+    spec:
+      serviceAccount: powermaxtest
+      containers:
+      - name: test
+        image: docker.io/centos:latest
+        command: [ "/bin/sleep", "3600" ]
+        volumeMounts:
+        - mountPath: "/data"
+          name: pvc
+      volumes:
+      - name: pvc
+        persistentVolumeClaim:
+          claimName: pvc
+```
+
+6. After the pod becomes `Ready` and `Running`, you can start using this pod and volume.
+
+>Note: The CSI driver for PowerMax will create the necessary objects such as the Storage Group, Host ID, and Masking View. They must not be created manually.
diff --git a/content/v3/csidriver/installation/test/powerscale.md b/content/v3/csidriver/installation/test/powerscale.md
index 7d47368830..96dedccdbf 100644
--- a/content/v3/csidriver/installation/test/powerscale.md
+++ b/content/v3/csidriver/installation/test/powerscale.md
@@ -6,7 +6,7 @@ description: Tests to validate PowerScale CSI Driver installation

This section provides multiple methods to test driver functionality in your environment.

-**Note**: To run the test for CSI Driver for Dell EMC PowerScale, install Helm 3.
+**Note**: To run the test for CSI Driver for Dell PowerScale, install Helm 3.

## Test deploying a simple pod with PowerScale storage

diff --git a/content/v3/csidriver/installation/test/unity.md b/content/v3/csidriver/installation/test/unity.md
index d675756696..95998ad511 100644
--- a/content/v3/csidriver/installation/test/unity.md
+++ b/content/v3/csidriver/installation/test/unity.md
@@ -4,6 +4,7 @@ linktitle: Unity description: Tests to validate Unity CSI Driver installation ---

+## Test deploying a simple Pod and PVC with Unity storage
In the repository, a simple test manifest exists that creates three different PersistentVolumeClaims using the default NFS, iSCSI, and FC storage classes and automatically mounts them to the pod.

**Steps**
@@ -26,3 +27,13 @@ You can find all the created resources in `test-unity` namespace.
```bash
kubectl delete -f ./test/sample.yaml
```
+
+## Support for SLES 15 SP2
+
+The CSI Driver for Dell Unity requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
+
+- open-iscsi **open-iscsi is required in order to make use of the iSCSI protocol for provisioning**
+- nfs-utils **nfs-utils is required in order to make use of the NFS protocol for provisioning**
+- multipath-tools **multipath-tools is required in order to make use of the FC and iSCSI protocols for provisioning**
+
+After installing open-iscsi, ensure the "iscsi" and "iscsid" services have been started and that /etc/iscsi/initiatorname.iscsi is created and has the host initiator ID. These prerequisites are mandatory for provisioning with the iSCSI protocol to work.
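As a convenience, the package and service checks above can be scripted; this is a minimal sketch assuming zypper as the SLES 15 SP2 package manager and the standard open-iscsi systemd unit names (`iscsid`, `iscsi`):

```bash
# Install the packages required for iSCSI, NFS, and multipath provisioning.
sudo zypper install -y open-iscsi nfs-utils multipath-tools

# Start the iSCSI services now and enable them across reboots.
sudo systemctl enable --now iscsid iscsi

# Confirm the initiator name file exists and carries the host initiator ID.
cat /etc/iscsi/initiatorname.iscsi
```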
diff --git a/content/v3/csidriver/partners/ophub1.png b/content/v3/csidriver/partners/ophub1.png index 5a728c6559..b86e59cd20 100644 Binary files a/content/v3/csidriver/partners/ophub1.png and b/content/v3/csidriver/partners/ophub1.png differ diff --git a/content/v3/csidriver/partners/ophub2.png b/content/v3/csidriver/partners/ophub2.png index 2083e7aa49..2094ebd6c2 100644 Binary files a/content/v3/csidriver/partners/ophub2.png and b/content/v3/csidriver/partners/ophub2.png differ diff --git a/content/v3/csidriver/partners/ophub3.png b/content/v3/csidriver/partners/ophub3.png index cf8ba75a27..84773431cf 100644 Binary files a/content/v3/csidriver/partners/ophub3.png and b/content/v3/csidriver/partners/ophub3.png differ diff --git a/content/v3/csidriver/partners/redhat.md b/content/v3/csidriver/partners/redhat.md index 1a5408788b..28299fe9d4 100644 --- a/content/v3/csidriver/partners/redhat.md +++ b/content/v3/csidriver/partners/redhat.md @@ -5,7 +5,7 @@ weight: 3 description: > Installing the certified Dell CSI Operator on OpenShift --- -The Dell EMC CSI Drivers support Red Hat OpenShift. Please see the [Supported Platforms](../../#features-and-capabilities) table for more details. +The Dell CSI Drivers support Red Hat OpenShift. Please see the [Supported Platforms](../../#features-and-capabilities) table for more details. The CSI drivers can be installed via Helm charts or Dell CSI Operator. The Dell CSI Operator allows for easy installation of the driver via the Openshift UI. The process to install the Operator via the OpenShift UI can be found below. diff --git a/content/v3/csidriver/partners/tanzu.md b/content/v3/csidriver/partners/tanzu.md index a6d903c580..393f5b398f 100644 --- a/content/v3/csidriver/partners/tanzu.md +++ b/content/v3/csidriver/partners/tanzu.md @@ -3,7 +3,7 @@ title: "VMware Tanzu" Description: "About VMware Tanzu basic" --- -The CSI Driver for Dell EMC Unity and PowerScale supports VMware Tanzu and deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and supervisor namespace. +The CSI Driver for Dell Unity and PowerScale supports VMware Tanzu and deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and supervisor namespace. Currently, VMware Tanzu with normal configuration(without NAT) supports Kubernetes 1.20 and higher. The CSI driver can be installed on this cluster using Helm. Installation of CSI drivers in Tanzu via Operator has not been qualified. diff --git a/content/v3/csidriver/release/operator.md b/content/v3/csidriver/release/operator.md index b351182618..4451adff9d 100644 --- a/content/v3/csidriver/release/operator.md +++ b/content/v3/csidriver/release/operator.md @@ -3,22 +3,20 @@ title: Operator description: Release notes for Dell CSI Operator --- -## Release Notes - Dell CSI Operator 1.6.0 +## Release Notes - Dell CSI Operator 1.7.0 ->**Note:** There will be a delay in certification of Dell CSI Operator 1.6.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.6.0 release. +>**Note:** There will be a delay in certification of Dell CSI Operator 1.7.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.7.0 release. ### New Features/Changes -- Added support for OpenShift v4.9. 
+- Added support for Kubernetes 1.23. ### Fixed Issues There are no fixed issues in this release. ### Known Issues -| Issue | Workaround | -|-------|------------| -| A warning message will be listed in the events for cluster scoped objects if the driver is not upgraded after an operator upgrade. This happens because of the fix provided by Kubernetes in 1.20 for one of the known [issue](https://github.com/kubernetes/kubernetes/issues/65200). | After an operator upgrade, the objects will get updated automatically after 45 mins in case of no driver upgrade. | +There are no known issues in this release. ### Support -The Dell CSI Operator image is available on Dockerhub and is officially supported by Dell EMC. +The Dell CSI Operator image is available on Dockerhub and is officially supported by Dell. For any CSI operator and driver issues, questions or feedback, please follow our [support process](../../../support/). diff --git a/content/v3/csidriver/release/powerflex.md b/content/v3/csidriver/release/powerflex.md index c9dbd901ec..eabc638190 100644 --- a/content/v3/csidriver/release/powerflex.md +++ b/content/v3/csidriver/release/powerflex.md @@ -3,18 +3,11 @@ title: PowerFlex description: Release notes for PowerFlex CSI driver --- -## Release Notes - CSI PowerFlex v2.1.0 +## Release Notes - CSI PowerFlex v2.2.0 ### New Features/Changes -- Added support for OpenShift v4.9. -- Added support for CSI spec 1.5. -- Added support for new access modes in CSI Spec 1.5. -- Added support for PV/PVC metrics. -- Added support for CSM Authorization sidecar via Helm. -- Added v1 extensions to vg snaphot from v1alpha2. -- Added support to update helm charts to do a helm install without shell scripts. -- Added support for volume health monitoring -- Removed support for Fedora CoreOS +- Added support for Kubernetes 1.23. +- Added support for Amazon Elastic Kubernetes Service Anywhere. ### Fixed Issues @@ -28,4 +21,4 @@ There are no fixed issues in this release. ### Note: -- Support for kurbernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. +- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. diff --git a/content/v3/csidriver/release/powermax.md b/content/v3/csidriver/release/powermax.md index 278d297ab0..52c67cf950 100644 --- a/content/v3/csidriver/release/powermax.md +++ b/content/v3/csidriver/release/powermax.md @@ -3,13 +3,12 @@ title: PowerMax description: Release notes for PowerMax CSI driver --- -## Release Notes - CSI PowerMax v2.1.0 +## Release Notes - CSI PowerMax v2.2.0 ### New Features/Changes -- Added support for OpenShift v4.9. -- Added support for CSI spec 1.5. -- Added v2 suffix to the module names. -- Added support for CSM Authorization sidecar via Helm +- Added support for new access modes in CSI Spec 1.5. +- Added support for Volume Health Monitoring. +- Added support for Kubernetes 1.23. ### Fixed Issues There are no fixed issues in this release. @@ -21,6 +20,8 @@ There are no fixed issues in this release. | Delete Volume fails with the error message: volume is part of masking view | This issue is due to limitations in Unisphere and occurs when Unisphere is overloaded. 
Currently, there is no workaround for this but it can be avoided by ensuring that Unisphere is not overloaded during such operations. The Unisphere team is assessing a fix for this in a future Unisphere release| | Getting initiators list fails with context deadline error | The following error can occur during the driver installation if a large number of initiators are present on the array. There is no workaround for this but it can be avoided by deleting stale initiators on the array| | Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains| +| GetSnapVolumeList fails with context deadline error | The following error can occur if a large number of snapshots are present on the array. There is no workaround for this but it can be avoided by deleting unused snapshots on the array| +### Note: - +- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. diff --git a/content/v3/csidriver/release/powerscale.md b/content/v3/csidriver/release/powerscale.md index c70d9f111f..ff2a38a5eb 100644 --- a/content/v3/csidriver/release/powerscale.md +++ b/content/v3/csidriver/release/powerscale.md @@ -3,29 +3,28 @@ title: PowerScale description: Release notes for PowerScale CSI driver --- -## Release Notes - CSI Driver for PowerScale v2.1.0 +## Release Notes - CSI Driver for PowerScale v2.2.0 ### New Features/Changes -- Added support for OpenShift v4.9. -- Added support for CSI spec 1.5. -- Added support for new access modes in CSI Spec 1.5. -- Added support for PV/PVC metrics. -- Added ability to accept leader election timeout flags. -- Added support for Dell EMC PowerScale 9.3. -- Added support for volume health monitoring. +- Added support for Replication. +- Added support for Kubernetes 1.23. +- Added support to configure fsGroupPolicy. +- Added support for session based authentication along with basic authentication for PowerScale. ### Fixed Issues -There are no fixed issues in this release. +- CSI Driver installation fails with the error message "error getting FQDN". ### Known Issues -| Issue | Resolution or workaround, if known | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581

**Note:** In kubernetes 1.22 this limit has been relaxed to 192 characters.| -| If some older NFS exports /terminated worker nodes still in NFS export client list, CSI driver tries to add a new worker node it fails (For RWX volume). | User need to manually clean the export client list from old entries to make successful additon of new worker nodes. -| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100| +| Issue | Resolution or workaround, if known | +|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581

**Note:** In Kubernetes 1.22 this limit has been relaxed to 192 characters. |
+| If stale entries for older NFS exports or terminated worker nodes remain in the NFS export client list, the CSI driver fails when it tries to add a new worker node (for RWX volumes). | Users need to manually remove the old entries from the export client list so that new worker nodes can be added successfully. |
+| Deleting a namespace that has PVCs and pods created with the driver causes the external health monitor sidecar to crash. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
+| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter | + ### Note: -- Support for kurbernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. +- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters. diff --git a/content/v3/csidriver/release/powerstore.md b/content/v3/csidriver/release/powerstore.md index 173d236038..c624c9c509 100644 --- a/content/v3/csidriver/release/powerstore.md +++ b/content/v3/csidriver/release/powerstore.md @@ -3,15 +3,14 @@ title: PowerStore description: Release notes for PowerStore CSI driver --- -## Release Notes - CSI PowerStore v2.1.0 +## Release Notes - CSI PowerStore v2.2.0 ### New Features/Changes -- Added support for OpenShift v4.9. -- Added support for CSI spec 1.5. -- Added support for new access modes in CSI Spec 1.5. -- Added support for PV/PVC metrics. -- Added support for volume health monitoring. +- Added support for NVMe/TCP protocol. +- Added support for Kubernetes 1.23. +- Added support to configure fsGroupPolicy. +- Added support for configuring permissions using POSIX mode bits and NFSv4 ACLs on NFS mount directory. ### Fixed Issues @@ -19,10 +18,11 @@ There are no fixed issues in this release. ### Known Issues -| Issue | Resolution or workaround, if known | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100
| +| Issue | Resolution or workaround, if known | +|--------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100
| +| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set allowRoot: "true" in the storage class parameter |

### Note:

-- Support for kurbernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
+- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v3/csidriver/release/unity.md b/content/v3/csidriver/release/unity.md
index 62fcf772e0..87517e3703 100644
--- a/content/v3/csidriver/release/unity.md
+++ b/content/v3/csidriver/release/unity.md
@@ -3,15 +3,12 @@ title: Unity description: Release notes for Unity CSI driver ---

-## Release Notes - CSI Unity v2.1.0
+## Release Notes - CSI Unity v2.2.0

### New Features/Changes

-- Added support for OpenShift v4.9.
-- Added support for CSI spec 1.5.
-- Added support for new access modes in CSI Spec 1.5.
-- Added ability to associate a tenant with storage volumes.
-
-- Added support for volume health monitoring.
+- Added support for Kubernetes 1.23.
+- Added support for Standalone Helm Charts.

### Fixed Issues

@@ -26,4 +23,4 @@ description: Release notes for Unity CSI driver

### Note:

-- Support for kurbernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode introduced in the release will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
+- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/v3/csidriver/troubleshooting/powerflex.md b/content/v3/csidriver/troubleshooting/powerflex.md
index 4d701a1f19..5699c2ec98 100644
--- a/content/v3/csidriver/troubleshooting/powerflex.md
+++ b/content/v3/csidriver/troubleshooting/powerflex.md
@@ -14,11 +14,12 @@ description: Troubleshooting PowerFlex Driver

|CreateVolume error System is not configured in the driver | Powerflex name if used for systemID in StorageClass ensure same name is also used in array config systemID |
|Defcontext mount option seems to be ignored, volumes still are not being labeled correctly.|Ensure SElinux is enabled on a worker node, and ensure your container run time manager is properly configured to be utilized with SElinux.|
|Mount options that interact with SElinux are not working (like defcontext).|Check that your container orchestrator is properly configured to work with SElinux.|
-|Installation of the driver on Kubernetes v1.20/v1.21/v1.22 fails with the following error:
```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.20/1.21/v1.22 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)| +|Installation of the driver on Kubernetes v1.21/v1.22/v1.23 fails with the following error:
```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)| | The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self assigned certificate is used for PowerFlex array. See [certificate validation for PowerFlex Gateway](../../installation/helm/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)| | When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. | | The controller pod is stuck and producing errors such as" `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. | - +| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | +| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) | >*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease diff --git a/content/v3/csidriver/troubleshooting/powerscale.md b/content/v3/csidriver/troubleshooting/powerscale.md index 06ed1754b4..e3f233a76c 100644 --- a/content/v3/csidriver/troubleshooting/powerscale.md +++ b/content/v3/csidriver/troubleshooting/powerscale.md @@ -10,9 +10,11 @@ Here are some installation failures that might be encountered and how to mitigat |The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver **cannot authenticate** | Check your secret's username and password for corresponding cluster | |The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver failed to connect to the Isilon because it **couldn't verify the certificates** | Check the isilon-certs- secret and ensure it is not empty and it has the valid certificates. Set `isiInsecure: "true"` for insecure connection. SSL validation is recommended in the production environment. | |The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver error: **create volume failed, Access denied. create directory as requested** | This situation can happen when the user who created the base path is different from the user configured for the driver. Make sure the user used to deploy CSI-Driver must have enough rights on the base path (i.e. 
isiPath) to perform all operations. | -|Volume/filesystem is allowed to mount by any host in the network, though that host is not a part of the export of that particular volume under /ifs directory | "Dell EMC PowerScale: OneFS NFS Design Considerations and Best Practices":
There is a default shared directory (ifs) of OneFS, which lets clients running Windows, UNIX, Linux, or Mac OS X access the same directories and files. It is recommended to disable the ifs shared directory in a production environment and create dedicated NFS exports and SMB shares for your workload. | +|Volume/filesystem is allowed to mount by any host in the network, though that host is not a part of the export of that particular volume under /ifs directory | "Dell PowerScale: OneFS NFS Design Considerations and Best Practices":
There is a default shared directory (ifs) of OneFS, which lets clients running Windows, UNIX, Linux, or Mac OS X access the same directories and files. It is recommended to disable the ifs shared directory in a production environment and create dedicated NFS exports and SMB shares for your workload. | | Creating snapshot fails if the parameter IsiPath in volume snapshot class and related storage class is not the same. The driver uses the incorrect IsiPath parameter and tries to locate the source volume due to the inconsistency. | Ensure IsiPath in VolumeSnapshotClass yaml and related storageClass yaml are the same. | | While deleting a volume, if there are files or folders created on the volume that are owned by different users. If the Isilon credentials used are for a nonprivileged Isilon user, the delete volume action fails. It is due to the limitation in Linux permission control. | To perform the delete volume action, the user account must be assigned a role that has the privilege ISI_PRIV_IFS_RESTORE. The user account must have the following set of privileges to ensure that all the CSI Isilon driver capabilities work properly:
* ISI_PRIV_LOGIN_PAPI
* ISI_PRIV_NFS
* ISI_PRIV_QUOTA
* ISI_PRIV_SNAPSHOT
* ISI_PRIV_IFS_RESTORE
* ISI_PRIV_NS_IFS_ACCESS
In some cases, ISI_PRIV_BACKUP is also required, for example, when files owned by other users have mode bits set to 700. | | If the hostname is mapped to loopback IP in /etc/hosts file, and pods are created using 1.3.0.1 release, after upgrade to driver version 1.4.0 or later there is a possibility of "localhost" as a stale entry in export | Recommended setup: User should not map a hostname to loopback IP in /etc/hosts file | -| CSI Driver installation fails with the error message "error getting FQDN". | Map IP address of host with its FQDN in /etc/hosts file. | | Driver node pod is in "CrashLoopBackOff" as "Node ID" generated is not with proper FQDN. | This might be due to "dnsPolicy" implemented on the driver node pod which may differ with different networks.

This parameter is configurable in both the Helm and Operator installers, and the user can try different "dnsPolicy" settings according to the environment.|
+| The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver **Authentication failed. Trying to re-authenticate** when using Session-based authentication | The issue has been resolved from OneFS 9.3 onwards. For OneFS versions prior to 9.3, when using session-based authentication, either SmartConnect can be created against a single node of Isilon, or the CSI Driver can be installed/pointed to a particular node of the Isilon; alternatively, basic authentication can be used by setting isiAuthType in `values.yaml` to 0 |
+| When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time. This is to allow faster creation (within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. |
+| While attaching a ReadOnly PVC from a volume snapshot to a pod, the mount operation will fail with error `mounting ... failed, reason given by server: No such file or directory`, if the RO volume's access zone (non-System access zone) on Isilon is configured with a dedicated service IP (which is the same as the `AzServiceIP` storage class parameter). This operation results in accessing the snapshot base directory (`/ifs`) and overstepping the RO volume's access zone's base directory, which OneFS doesn't allow. | Provide a service IP that belongs to the RO volume's access zone which sets the highest level `/ifs` as its zone base directory. |
diff --git a/content/v3/csidriver/troubleshooting/powerstore.md b/content/v3/csidriver/troubleshooting/powerstore.md
index 5f9f9e74c8..2de1b8de02 100644
--- a/content/v3/csidriver/troubleshooting/powerstore.md
+++ b/content/v3/csidriver/troubleshooting/powerstore.md
@@ -7,5 +7,5 @@

| --- | --- |
| When you run the command `kubectl describe pods powerstore-controller- -n csi-powerstore`, the system indicates that the driver image could not be loaded. | - If on Kubernetes, edit the daemon.json file found in the registry location and add `{ "insecure-registries" :[ "hostname.cloudapp.net:5000" ] }`
- If on OpenShift, run the command `oc edit image.config.openshift.io/cluster` and add registries to yaml file that is displayed when you run the command.| | The `kubectl logs -n csi-powerstore powerstore-node-` driver logs show that the driver can't connect to PowerStore API. | Check if you've created a secret with correct credentials | -|Installation of the driver on Kubernetes supported versions fails with the following error:
```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.20/1.21/v1.22 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerstore/#optional-volume-snapshot-requirements)| +|Installation of the driver on Kubernetes supported versions fails with the following error:
```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerstore/#optional-volume-snapshot-requirements)| | If PVC is not getting created and getting the following error in PVC description:
```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with correct credentials | diff --git a/content/v3/csidriver/troubleshooting/unity.md b/content/v3/csidriver/troubleshooting/unity.md index 4091313390..447b218737 100644 --- a/content/v3/csidriver/troubleshooting/unity.md +++ b/content/v3/csidriver/troubleshooting/unity.md @@ -12,4 +12,5 @@ description: Troubleshooting Unity Driver | Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically| | If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. | | PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the below command the driver pods with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** | +| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. | diff --git a/content/v3/csidriver/upgradation/drivers/isilon.md b/content/v3/csidriver/upgradation/drivers/isilon.md index 76ef431580..e473a299e4 100644 --- a/content/v3/csidriver/upgradation/drivers/isilon.md +++ b/content/v3/csidriver/upgradation/drivers/isilon.md @@ -6,36 +6,27 @@ tags: weight: 1 Description: Upgrade PowerScale CSI driver --- -You can upgrade the CSI Driver for Dell EMC PowerScale using Helm or Dell CSI Operator. +You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator. -## Upgrade Driver from version 2.0.0 to 2.1.0 +## Upgrade Driver from version 2.1.0 to 2.2.0 using Helm **Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. Verify that all pre-requisites to install CSI Driver for Dell EMC PowerScale version 2.1.0 are fulfilled. Note that change in secret format should be implemented. - - Delete the existing secret (isilon-creds and isilon-certs-0) - - Create new secrets (isilon-creds and isilon-certs-0) - Refer Installation section [here](./../../../installation/helm/isilon/#install-the-driver). -2. 
Clone the repository using `git clone -b v2.1.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
-3. Change to directory dell-csi-helm-installer to install the Dell EMC PowerScale `cd dell-csi-helm-installer`
-4. Upgrade the CSI Driver for Dell EMC PowerScale version 2.1.0 using following command:
+1. Clone the repository using `git clone -b v2.2.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name, say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
+2. Change to the directory dell-csi-helm-installer to install the Dell PowerScale driver: `cd dell-csi-helm-installer`
+3. Upgrade the CSI Driver for Dell PowerScale using the following command:
`./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade`

## Upgrade using Dell CSI Operator:
+**Notes:**
+1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
+2. Upgrading the Operator does not upgrade the CSI Driver.

-**Note:** While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
+To upgrade the driver:

-To upgrade the driver from version 2.0.0 to 2.1.0:
-
-Note: It is highly recommended to take *Backup of existing storage class definition and volumesnapshot class definition, yaml files* before the upgrade.
-
-1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
-
-2. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
->Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
-
-3. To upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
+1. Please upgrade the Dell CSI Operator by following the steps [here](./../operator).
+2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
diff --git a/content/v3/csidriver/upgradation/drivers/operator.md b/content/v3/csidriver/upgradation/drivers/operator.md
index 2a4bf38b68..0cfbc9355e 100644
--- a/content/v3/csidriver/upgradation/drivers/operator.md
+++ b/content/v3/csidriver/upgradation/drivers/operator.md
@@ -6,17 +6,25 @@ tags: weight: 1 Description: Upgrade Dell CSI Operator ---

-To upgrade Dell CSI Operator from v1.2.0/v1.3.0 to v1.4.0/v1.5.0/v1.6.0, perform the following steps.
+To upgrade Dell CSI Operator, perform the following steps.
+Dell CSI Operator can be upgraded, based on the supported platforms, in one of two ways:
+1. Using a script (for non-OLM based installation)
+2. Using Operator Lifecycle Manager (OLM)
+
### Using Installation Script
-Run the following command to upgrade the operator
-```
-$ bash scripts/install.sh --upgrade
-```
+1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator).
+2. `cd dell-csi-operator`
+3. `git checkout dell-csi-operator-'your-version'`
+4. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator.
+>Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. 
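Put together, the script-based operator upgrade amounts to the sequence below; the checkout tag is a placeholder following the `dell-csi-operator-'your-version'` pattern named above, so substitute the tag that matches your target release:

```bash
# Fetch the operator repository and move into it.
git clone https://github.com/dell/dell-csi-operator.git
cd dell-csi-operator

# Check out the tag for the version you are upgrading to (placeholder shown).
git checkout dell-csi-operator-v1.7.0

# Run the installer in upgrade mode; on operator v1.4.0 and higher this
# installs into the 'dell-csi-operator' namespace by default.
bash scripts/install.sh --upgrade
```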
### Using OLM The upgrade of the Dell CSI Operator is done via Operator Lifecycle Manager. -If the `InstallPlan` for the Operator subscription is set to `Automatic`, the operator will be automatically upgraded to the new version. If the `InstallPlan` is set to `Manual`, then a Cluster Administrator would need to approve the upgrade. + +The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role while upgrading dell-csi-operator on OpenShift. This option can be set during installation of dell-csi-operator on OpenShift via the console and can be either set to `Manual` or `Automatic`. + - If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version. + - If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process. **NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.5.0`. diff --git a/content/v3/csidriver/upgradation/drivers/powerflex.md b/content/v3/csidriver/upgradation/drivers/powerflex.md index 626538f5f9..0611b63233 100644 --- a/content/v3/csidriver/upgradation/drivers/powerflex.md +++ b/content/v3/csidriver/upgradation/drivers/powerflex.md @@ -8,14 +8,14 @@ weight: 1 Description: Upgrade PowerFlex CSI driver --- -You can upgrade the CSI Driver for Dell EMC PowerFlex using Helm or Dell CSI Operator. +You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator. -## Update Driver from v2.0 to v2.1 using Helm +## Update Driver from v2.1 to v2.2 using Helm **Steps** -1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.0 driver. +1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.2.0 driver. 2. You need to create config.yaml with the configuration of your system. Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver) - You must set the only system managed in v1.5/v2.0 driver as default in config.json in v2.1 so that the driver knows the existing volumes belong to that system. + You must set the only system managed in v1.5/v2.0/v2.1 driver as default in config.json in v2.2 so that the driver knows the existing volumes belong to that system. 3. Update values file as needed. 4. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade`. @@ -25,10 +25,8 @@ You can upgrade the CSI Driver for Dell EMC PowerFlex using Helm or Dell CSI Ope - The logging configuration from v1.5 will not work in v2.1, since the log configuration parameters are now set in the values.yaml file located at helm/csi-vxflexos/values.yaml. Please set the logging configuration parameters in the values.yaml file. ## Upgrade using Dell CSI Operator: -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). +**Note:** Upgrading the Operator does not upgrade the CSI Driver. -2. 
Execute `bash scripts/install.sh --upgrade` -This command will install the latest version of the operator. ->Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. +1. Please upgrade the Dell CSI Operator by following [here](./../operator). +2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers). -3. To upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers). diff --git a/content/v3/csidriver/upgradation/drivers/powermax.md b/content/v3/csidriver/upgradation/drivers/powermax.md index 82d68f6759..1f2ba76421 100644 --- a/content/v3/csidriver/upgradation/drivers/powermax.md +++ b/content/v3/csidriver/upgradation/drivers/powermax.md @@ -8,12 +8,12 @@ weight: 1 Description: Upgrade PowerMax CSI driver --- -You can upgrade CSI Driver for Dell EMC PowerMax using Helm or Dell CSI Operator. +You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator. -## Update Driver from v2.0 to v2.1 using Helm +## Update Driver from v2.1 to v2.2 using Helm **Steps** -1. Run `git clone -b v2.1.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.1 driver. +1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.2 driver. 2. Update the values file as needed. 2. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`. @@ -22,11 +22,8 @@ You can upgrade CSI Driver for Dell EMC PowerMax using Helm or Dell CSI Operator - To update any installation parameter after the driver has been installed, change the `my-powermax-settings.yaml` file and run the install script with the option _\-\-upgrade_, for example: `./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml –upgrade`. ## Upgrade using Dell CSI Operator: +**Note:** Upgrading the Operator does not upgrade the CSI Driver. -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). +1. Please upgrade the Dell CSI Operator by following [here](./../operator). +2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers). -2. Execute `bash scripts/install.sh --upgrade` -This command installs the latest version of the operator. ->Note: Dell CSI Operator version 1.4.0 and later installs to the 'dell-csi-operator' namespace by default. - -3. To upgrade the driver, see [here](./../../../installation/operator/#update-csi-drivers). diff --git a/content/v3/csidriver/upgradation/drivers/powerstore.md b/content/v3/csidriver/upgradation/drivers/powerstore.md index 96be7e3630..7f5152bd3f 100644 --- a/content/v3/csidriver/upgradation/drivers/powerstore.md +++ b/content/v3/csidriver/upgradation/drivers/powerstore.md @@ -7,38 +7,39 @@ weight: 1 Description: Upgrade PowerStore CSI driver --- -You can upgrade the CSI Driver for Dell EMC PowerStore using Helm or Dell CSI Operator. +You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator. -## Update Driver from v2.0 to v2.1 using Helm +## Update Driver from v2.1 to v2.2 using Helm Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. **Steps** -1. 
Run `git clone -b v2.1.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. +1. Run `git clone -b v2.2.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver. 2. Edit `helm/config.yaml` file and configure connection information for your PowerStore arrays changing the following parameters: - *endpoint*: defines the full URL path to the PowerStore API. - *globalID*: specifies what storage cluster the driver should use - *username*, *password*: defines credentials for connecting to array. - *skipCertificateValidation*: defines if we should use insecure connection or not. - *isDefault*: defines if we should treat the current array as a default. - - *blockProtocol*: defines what SCSI transport protocol we should use (FC, ISCSI, None, or auto). + - *blockProtocol*: defines what transport protocol we should use (FC, ISCSI, NVMeTCP, None, or auto). - *nasName*: defines what NAS should be used for NFS volumes. + - *nfsAcls*: (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. + NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares. Add more blocks similar to above for each PowerStore array if necessary. 3. (optional) create new storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f ` - >Storage classes created by v1.4/v2.0 driver will not be deleted, v2.1 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0 in your cluster then be sure to include the same array you have used for the v1.4/v2.0 driver and make it default in the `config.yaml` file. + >Storage classes created by v1.4/v2.0/v2.1 driver will not be deleted, v2.2 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1 driver and make it default in the `config.yaml` file. 4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml``` 5. Copy the default values.yaml file `cp ./helm/csi-powerstore/values.yaml ./dell-csi-helm-installer/my-powerstore-settings.yaml` and update parameters as per the requirement. 6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./dell-csi-helm-installer/csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`. ## Upgrade using Dell CSI Operator: -Note: While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes. +**Notes:** +1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes. +2. Upgrading the Operator does not upgrade the CSI Driver. -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). -2. Execute `bash scripts/install.sh --upgrade` -This command will install the latest version of the operator. ->Note: Dell CSI Operator version 1.5.0 and higher would install to the 'dell-csi-operator' namespace by default. +1. 
Upgrade the Dell CSI Operator by following the instructions [here](./../operator). +2. Once the operator is upgraded, upgrade the driver by following the instructions [here](./../../../installation/operator/#update-csi-drivers). -3. To upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers). diff --git a/content/v3/csidriver/upgradation/drivers/unity.md b/content/v3/csidriver/upgradation/drivers/unity.md index 60b7f33440..23ee1340e1 100644 --- a/content/v3/csidriver/upgradation/drivers/unity.md +++ b/content/v3/csidriver/upgradation/drivers/unity.md @@ -7,37 +7,35 @@ weight: 1 Description: Upgrade Unity CSI driver --- -You can upgrade the CSI Driver for Dell EMC Unity using Helm or Dell CSI Operator. +You can upgrade the CSI Driver for Dell Unity using Helm or Dell CSI Operator. +**Note:** +1. Users must re-create existing custom storage classes (if any) according to the latest format. +2. Users must create a VolumeSnapshotClass after the upgrade in order to take snapshots. +3. Secret.yaml files can be updated according to the Multiarray normalization parameters only after upgrading the driver. + ### Using Helm **Note:** While upgrading the driver via Helm, the controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes. Preparing myvalues.yaml is the same as explained in the install section. -To upgrade the driver from csi-unity v2.0 to csi-unity 2.1 +To upgrade the driver from csi-unity v2.1 to csi-unity v2.2: -1. Get the latest csi-unity 2.1 code from Github using using `git clone -b v2.1.0 https://github.com/dell/csi-unity.git`. +1. Get the latest csi-unity 2.2 code from GitHub using `git clone -b v2.2.0 https://github.com/dell/csi-unity.git`. 2. Create myvalues.yaml. 3. Copy helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer with a name such as myvalues.yaml; to customize settings for installation, edit myvalues.yaml as required. 4. Navigate to the common-helm-installer folder and execute the following command: `./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade` - -**Note:** -1. User has to re-create existing custom-storage classes (if any) according to the latest format. -2. User has to create Volumesnapshotclass after upgrade for taking Snapshots. -3. Secret.yaml files can be updated according to Multiarray Normalization parameters only after upgrading the driver. ### Using Operator -**Note:** While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes. - -To upgrade the driver from csi-unity v2.0 to csi-unity v2.1 : +**Notes:** +1. While upgrading the driver via operator, the replicas count in the sample CR yaml can be at most one less than the number of worker nodes. +2. Upgrading the Operator does not upgrade the CSI Driver. -1. Clone the [Dell CSI Operator repository](https://github.com/dell/dell-csi-operator). +To upgrade the driver: -2. Execute `bash scripts/install.sh --upgrade` -This command will install the latest version of the operator. ->Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default. +1. Upgrade the Dell CSI Operator by following the instructions [here](./../operator). +2. Once the operator is upgraded, upgrade the driver by following the instructions [here](./../../../installation/operator/#update-csi-drivers). -3. To upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
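Whichever upgrade path is used, it is worth confirming that the new driver version actually rolled out. A minimal sketch, assuming the `unity` namespace from the Helm example above (any driver namespace works the same way):

```console
# List the images running in the driver pods to confirm the upgraded version.
kubectl get pods -n unity \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```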
diff --git a/content/v3/deployment/_index.md b/content/v3/deployment/_index.md index 75eee043ba..23b93beb33 100644 --- a/content/v3/deployment/_index.md +++ b/content/v3/deployment/_index.md @@ -6,8 +6,8 @@ weight: 1 --- The Container Storage Modules and the required CSI Drivers can each be deployed following the links below: -- [Dell EMC CSI Drivers Installation](../csidriver/installation) -- [Dell EMC Container Storage Module for Observability](../observability/deployment) -- [Dell EMC Container Storage Module for Authorization](../authorization/deployment) -- [Dell EMC Container Storage Module for Resiliency](../resiliency/deployment) -- [Dell EMC Container Storage Module for Replication](../replication/deployment) \ No newline at end of file +- [Dell CSI Drivers Installation](../csidriver/installation) +- [Dell Container Storage Module for Observability](../observability/deployment) +- [Dell Container Storage Module for Authorization](../authorization/deployment) +- [Dell Container Storage Module for Resiliency](../resiliency/deployment) +- [Dell Container Storage Module for Replication](../replication/deployment) \ No newline at end of file diff --git a/content/v3/deployment/csminstaller/_index.md b/content/v3/deployment/csminstaller/_index.md index f7b0e7f6d3..95ae36a236 100644 --- a/content/v3/deployment/csminstaller/_index.md +++ b/content/v3/deployment/csminstaller/_index.md @@ -5,9 +5,15 @@ description: Container Storage Modules Installer weight: 1 --- -The CSM (Container Storage Modules) Installer simplifies the deployment and management of Dell EMC Container Storage Modules and CSI Drivers to provide persistent storage for your containerized workloads. +{{% pageinfo color="primary" %}} +The CSM Installer is currently deprecated and will no longer be supported as of CSM v1.4.0. +{{% /pageinfo %}} -## CSM Installer Supported Modules and Dell EMC CSI Drivers +>**Note: The CSM Installer only supports installation of CSM 1.0 Modules and CSI Drivers in environments that do not have any existing deployments of CSM or CSI Drivers. The CSM Installer does not support the upgrade of existing CSM or CSI Driver deployments.** + +The CSM (Container Storage Modules) Installer simplifies the deployment and management of Dell Container Storage Modules and CSI Drivers to provide persistent storage for your containerized workloads. + +## CSM Installer Supported Modules and Dell CSI Drivers | Modules/Drivers | CSM 1.0 | | - | :-: | @@ -21,8 +27,6 @@ The CSM (Container Storage Modules) Installer simplifies the deployment and mana | CSI Driver for PowerFlex | v2.0 | | CSI Driver for PowerMax | v2.0 | -**Note:** The CSM Installer supports installation of CSM 1.0 Modules and CSI Drivers in environments that do not have any existing deployments of CSM or CSI Drivers. The CSM Installer does not support the upgrade of existing CSM or CSI Driver deployments. - The CSM Installer must first be deployed in a Kubernetes environment using Helm. Afterward, the CSM Installer can be used through the following interfaces: - [CSM CLI](./csmcli) - [REST API](./csmapi) @@ -150,7 +154,7 @@ helm install -n csm-installer --create-namespace \ When a new version of the CSM Installer helm chart is available, the following steps can be used to upgrade to the latest version. ->Note: Upgrading the CSM Installer does not upgrade the Dell EMC CSI Drivers or modules that were previously deployed with the installer. The CSM Installer does not support upgrading of the Dell EMC CSI Drivers or modules.
The Dell EMC CSI Drivers and modules must be deleted and re-deployed using the latest CSM Installer in order to get the most recent version of the Dell EMC CSI Driver and modules. +>Note: Upgrading the CSM Installer does not upgrade the Dell CSI Drivers or modules that were previously deployed with the installer. The CSM Installer does not support upgrading of the Dell CSI Drivers or modules. The Dell CSI Drivers and modules must be deleted and re-deployed using the latest CSM Installer in order to get the most recent version of the Dell CSI Driver and modules. 1. Update the helm repository. ``` @@ -186,4 +190,4 @@ helm upgrade -n csm-installer \ 1. Delete the Helm chart ``` helm delete -n csm-installer csm-installer -``` \ No newline at end of file +``` diff --git a/content/v3/deployment/csminstaller/csmcli.md b/content/v3/deployment/csminstaller/csmcli.md index 7e21ffcb1c..3711351969 100644 --- a/content/v3/deployment/csminstaller/csmcli.md +++ b/content/v3/deployment/csminstaller/csmcli.md @@ -3,9 +3,9 @@ title : CSM CLI linktitle: CSM CLI weight: 2 description: > - Dell EMC Container Storage Modules (CSM) Command Line Interface(CLI) Deployment and Management + Dell Container Storage Modules (CSM) Command Line Interface (CLI) Deployment and Management --- -`csm` is a command-line client for installation of Dell EMC Container Storage Modules and CSI Drivers for Kubernetes clusters. +`csm` is a command-line client for installation of Dell Container Storage Modules and CSI Drivers for Kubernetes clusters. ## Pre-requisites @@ -83,7 +83,7 @@ To change the password, run the command below ### View Supported Platforms -You can now view the supported Dell emcCSI Drivers +You can now view the supported Dell CSI Drivers ```console ./csm get supported-drivers @@ -192,9 +192,9 @@ See the individual steps for configuration file pre-requisites for CSM Observab
- CSI Driver for Dell EMC PowerMax with reverse proxy module + CSI Driver for Dell PowerMax with reverse proxy module - To deploy CSI Driver for Dell EMC PowerMax with reverse proxy module, first upload reverse proxy tls crt and tls key via [adding configuration file](#upload-configuration-files). Then, use the below command to create application: + To deploy CSI Driver for Dell PowerMax with the reverse proxy module, first upload the reverse proxy TLS certificate and key via [adding configuration files](#upload-configuration-files). Then, use the command below to create the application: ```console ./csm create application --clustername \ @@ -208,7 +208,7 @@ See the individual steps for configuration file pre-requisites for CSM Observab
CSI Driver with replication module - To deploy CSI driver with replication module, first add a target cluster through [adding cluster](#add-a-cluster). Then, use the below command(this command is an example to deploy CSI Driver for Dell EMC PowerStore with replication module) to create application:: + To deploy a CSI driver with the replication module, first add a target cluster through [adding cluster](#add-a-cluster). Then, use the command below (this command is an example of deploying the CSI Driver for Dell PowerStore with the replication module) to create the application: ```console ./csm create application --clustername \ diff --git a/content/v3/deployment/csmoperator/_index.md b/content/v3/deployment/csmoperator/_index.md new file mode 100644 index 0000000000..702fab7871 --- /dev/null +++ b/content/v3/deployment/csmoperator/_index.md @@ -0,0 +1,209 @@ +--- +title: "CSM Operator" +linkTitle: "CSM Operator" +description: Container Storage Modules Operator +weight: 1 +--- + +{{% pageinfo color="primary" %}} +The Dell CSM Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed. +{{% /pageinfo %}} + +The Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually. + +## Supported Platforms +Dell CSM Operator has been tested and qualified on upstream Kubernetes and OpenShift. Supported versions are listed below. + +| Kubernetes Version | OpenShift Version | | -------------------- | ------------------- | | 1.21, 1.22, 1.23 | 4.8, 4.9 | + +## Supported CSI Drivers + +| CSI Driver | Version | ConfigVersion | | ------------------ | --------- | -------------- | | CSI PowerScale | 2.2.0 | v2.2.0 | + +## Supported CSM Modules + +| CSM Modules | Version | ConfigVersion | | ------------------ | --------- | -------------- | | CSM Authorization | 1.2.0 | v1.2.0 | + +## Installation +Dell CSM Operator can be installed manually or via Operator Hub. + +### Manual Installation + +#### Operator Installation on a cluster without OLM + +1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator). +2. `cd csm-operator` +3. (Optional) If using a local Docker image, edit the `deploy/operator.yaml` file and set the image name for the CSM Operator Deployment. +4. Run `bash scripts/install.sh` to install the operator. + +>NOTE: Dell CSM Operator will be installed in the `dell-csm-operator` namespace. + +{{< imgproc install.jpg Resize "2500x" >}}{{< /imgproc >}} + +5. Run the command `kubectl get pods -n dell-csm-operator` to validate the installation. If installed successfully, you should be able to see the operator pod in the `dell-csm-operator` namespace. + +{{< imgproc install_pods.jpg Resize "2500x" >}}{{< /imgproc >}} + +#### Operator Installation on a cluster with OLM +1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator). +2. `cd csm-operator` +3. Run `bash scripts/install_olm.sh` to install the operator. +>NOTE: Dell CSM Operator will be installed in the `test-csm-operator-olm` namespace. + +{{< imgproc install_olm.jpg Resize "2500x" >}}{{< /imgproc >}} + +4.
Once installation completes, run the command `kubectl get pods -n test-csm-operator-olm` to validate the installation. If installed successfully, you should be able to see the operator pods and CSV in the `test-csm-operator-olm` namespace. The CSV phase will be in `Succeeded` state. + +{{< imgproc install_olm_pods.jpg Resize "2500x" >}}{{< /imgproc >}} + +>**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**. + +### Installation via Operator Hub +`dell-csm-operator` can be installed via Operator Hub on upstream Kubernetes clusters & Red Hat OpenShift Clusters. + +The installation process involves the creation of a `Subscription` object either via the _OperatorHub_ UI or using `kubectl/oc`. While creating the `Subscription` you can set the Approval strategy for the `InstallPlan` for the operator to: +* _Automatic_ - If you want the operator to be automatically installed or upgraded (once an upgrade is available). +* _Manual_ - If you want a cluster administrator to manually review and approve the `InstallPlan` for installation/upgrades. + +### Uninstall +#### Operator uninstallation on a cluster without OLM +To uninstall a CSM operator, run `bash scripts/uninstall.sh`. This will uninstall the operator in `dell-csm-operator` namespace. + +{{< imgproc uninstall.jpg Resize "2500x" >}}{{< /imgproc >}} + +#### Operator uninstallation on a cluster with OLM +To uninstall a CSM operator installed with OLM run `bash scripts/uninstall_olm.sh`. This will uninstall the operator in `test-csm-operator-olm` namespace. + +{{< imgproc uninstall_olm.jpg Resize "2500x" >}}{{< /imgproc >}} + +### Custom Resource Definitions +As part of the Dell CSM Operator installation, a CRD representing configuration for the CSI Driver and CSM Modules is also installed. +`containerstoragemodule` CRD is installed in API Group `storage.dell.com`. + +Drivers and modules can be installed by creating a `customResource`. + +### Custom Resource Specification +Each CSI Driver and CSM Module installation is represented by a Custom Resource. + +The specification for the Custom Resource is the same for all the drivers.Below is a list of all the mandatory and optional fields in the Custom Resource specification + +#### Mandatory fields + +**configVersion** - Configuration version - refer [here](#full-list-of-csi-drivers-and-versions-supported-by-the-dell-csm-operator) for appropriate config version. + +**replicas** - Number of replicas for controller plugin - must be set to 1 for all drivers. + +**dnsPolicy** - Determines the dnsPolicy for the node daemonset. Accepted values are `Default`, `ClusterFirst`, `ClusterFirstWithHostNet`, `None`. + +**common** - This field is mandatory and is used to specify common properties for both controller and the node plugin. + +* image - driver container image +* imagePullPolicy - Image Pull Policy of the driver image +* envs - List of environment variables and their values + +#### Optional fields + +**controller** - List of environment variables and values which are applicable only for controller. + +**node** - List of environment variables and values which are applicable only for node. + +**sideCars** - Specification for CSI sidecar containers. + +**authSecret** - Name of the secret holding credentials for use by the driver. If not specified, the default secret *-creds must exist in the same namespace as driver. + +**tlsCertSecret** - Name of the TLS cert secret for use by the driver. If not specified, a secret *-certs must exist in the namespace as driver. 
+ +**tolerations** - List of tolerations which should be applied to the driver StatefulSet/Deployment and DaemonSet. It should be set separately in the controller and node sections if you want separate set of tolerations for them. + +**nodeSelector** - Used to specify node selectors for the driver StatefulSet/Deployment and DaemonSet. + +>**Note:** The `image` field should point to the correct image tag for version of the driver you are installing. + +### Pre-requisites for installation of the CSI Drivers + +On Upstream Kubernetes clusters, make sure to install +* VolumeSnapshot CRDs - Install v1 VolumeSnapshot CRDs +* External Volume Snapshot Controller + +#### Volume Snapshot CRD's +The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) + +#### Volume Snapshot Controller +The CSI external-snapshotter sidecar is split into two controllers: +- A common snapshot controller +- A CSI external-snapshotter sidecar + +The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) + +*NOTE:* +- The manifests available on GitHub install the snapshotter image: + - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags) +- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration. + +#### Installation example + +You can install CRDs and the default snapshot controller by running the following commands: +```bash +git clone https://github.com/kubernetes-csi/external-snapshotter/ +cd ./external-snapshotter +git checkout release- +kubectl create -f client/config/crd +kubectl create -f deploy/kubernetes/snapshot-controller +``` +*NOTE:* +- It is recommended to use 5.0.x version of snapshotter/snapshot-controller. + +## Installing CSI Driver via Operator + +Refer [PowerScale Driver](drivers/powerscale) to install the driver via Operator + +>**Note**: If you are using an OLM based installation, example manifests are available in `OperatorHub` UI. +You can edit these manifests and install the driver using the `OperatorHub` UI. + +### Verifying the driver installation +Once the driver `Custom Resource (CR)` is created, you can verify the installation as mentioned below + +* Check if ContainerStorageModule CR is created successfully using the command below: + ``` + $ kubectl get csm/ -n -o yaml + ``` +* Check the status of the CR to verify if the driver installation is in the `Succeeded` state. If the status is not `Succeeded`, see the [Troubleshooting guide](./troubleshooting/#my-dell-csi-driver-install-failed-how-do-i-fix-it) for more information. + + +### Update CSI Drivers +The CSI Drivers and CSM Modules installed by the Dell CSM Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include: + +* Modifying the installation directly via `kubectl edit` + For e.g. 
- If the name of the installed PowerScale driver is powerscale, then run + ``` + # Replace driver-namespace with the namespace where the PowerScale driver is installed + $ kubectl edit csm/powerscale -n + ``` + and modify the installation +* Modify the API object in-place via `kubectl patch` + +#### Supported modifications +* Changing environment variable values for driver +* Updating the image of the driver + +### Uninstall CSI Driver +The CSI Drivers and CSM Modules can be uninstalled by deleting the Custom Resource. + +For example: +``` +$ kubectl delete csm/powerscale -n +``` + +By default, the `forceRemoveDriver` option is set to `true`, which will uninstall the CSI Driver and CSM Modules when the Custom Resource is deleted. Setting this option to `false` is not recommended. + +### SideCars +Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should only be done after consulting with Dell support. + +## Modules +The CSM Operator can optionally enable modules that are supported by the specific Dell CSI driver. By default, the modules are disabled but they can be enabled by setting the `enabled` flag to true and setting any other configuration options for the given module. diff --git a/content/v3/deployment/csmoperator/drivers/_index.md b/content/v3/deployment/csmoperator/drivers/_index.md new file mode 100644 index 0000000000..c850691c0d --- /dev/null +++ b/content/v3/deployment/csmoperator/drivers/_index.md @@ -0,0 +1,6 @@ +--- +title: "CSI Drivers" +linkTitle: "CSI Drivers" +description: Installation of Dell CSI Drivers using Dell CSM Operator +weight: 1 +--- diff --git a/content/v3/deployment/csmoperator/drivers/powerscale.md b/content/v3/deployment/csmoperator/drivers/powerscale.md new file mode 100644 index 0000000000..951ece9dd0 --- /dev/null +++ b/content/v3/deployment/csmoperator/drivers/powerscale.md @@ -0,0 +1,139 @@ +--- +title: PowerScale +linkTitle: "PowerScale" +description: > + Installing Dell CSI Driver for PowerScale via Dell CSM Operator +--- + +## Installing CSI Driver for PowerScale via Dell CSM Operator + +The CSI Driver for Dell PowerScale can be installed via the Dell CSM Operator. +To deploy the Operator, follow the instructions available [here](../../#installation). + +Note that the deployment of the driver using the operator does not use any Helm charts, and the installation and configuration parameters will be slightly different from those specified via the Helm installer. + +**Note**: MKE (Mirantis Kubernetes Engine) does not support the installation of CSI-PowerScale via Operator. + +### Listing installed drivers with the ContainerStorageModule CRD +Users can query all installed Dell CSI drivers using the following command: +`kubectl get csm --all-namespaces` + +### Install Driver + +1. Create namespace. + Execute `kubectl create namespace test-isilon` to create the test-isilon namespace (if not already present). Note that the namespace can be any user-defined name; in this example, we assume that the namespace is 'test-isilon'. + +2.
Create *isilon-creds* secret by creating a yaml file called secret.yaml with the following content: + ``` + isilonClusters: + # logical name of PowerScale Cluster + - clusterName: "cluster1" + + # username for connecting to PowerScale OneFS API server + # Default value: None + username: "user" + + # password for connecting to PowerScale OneFS API server + password: "password" + + # HTTPS endpoint of the PowerScale OneFS API server + # Default value: None + # Examples: "1.2.3.4", "https://1.2.3.4", "https://abc.myonefs.com" + endpoint: "1.2.3.4" + + # Is this a default cluster (would be used by storage classes without ClusterName parameter) + # Allowed values: + # true: mark this cluster config as default + # false: mark this cluster config as not default + # Default value: false + isDefault: true + + # Specify whether the PowerScale OneFS API server's certificate chain and host name should be verified. + # Allowed values: + # true: skip OneFS API server's certificate verification + # false: verify OneFS API server's certificates + # Default value: default value specified in values.yaml + # skipCertificateValidation: true + + # The base path for the volumes to be created on PowerScale cluster + # This will be used if a storage class does not have the IsiPath parameter specified. + # Ensure that this path exists on PowerScale cluster. + # Allowed values: unix absolute path + # Default value: default value specified in values.yaml + # Examples: "/ifs/data/csi", "/ifs/engineering" + # isiPath: "/ifs/data/csi" + + # The permissions for isi volume directory path + # This will be used if a storage class does not have the IsiVolumePathPermissions parameter specified. + # Allowed values: valid octal mode number + # Default value: "0777" + # Examples: "0777", "777", "0755" + # isiVolumePathPermissions: "0777" + + - clusterName: "cluster2" + username: "user" + password: "password" + endpoint: "1.2.3.4" + endpointPort: "8080" + ``` + + Replace the values for the given keys as per your environment. After creating the secret.yaml, the following command can be used to create the secret, + `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml` + + Use the following command to replace or update the secret + + `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run | kubectl replace -f -` + + **Note**: The user needs to validate the YAML syntax and array related key/values while replacing the isilon-creds secret. + The driver will continue to use previous values in case of an error found in the YAML file. + +3. Create isilon-certs-n secret. + Please refer [this section](../../../../csidriver/installation/helm/isilon/#certificate-validation-for-onefs-rest-api-calls) for creating cert-secrets. + + If certificate validation is skipped, empty secret must be created. To create an empty secret. Ex: empty-secret.yaml + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: isilon-certs-0 + namespace: isilon + type: Opaque + data: + cert-0: "" + ``` + Execute command: ```kubectl create -f empty-secret.yaml``` + +4. Create a CR (Custom Resource) for PowerScale using the sample files provided + [here](https://github.com/dell/csm-operator/tree/master/samples). This file can be modified to use custom parameters if needed. + +5. Users should configure the parameters in CR. 
The following table lists the primary configurable parameters of the PowerScale driver and their default values: + + | Parameter | Description | Required | Default | + | --------- | ----------- | -------- | -------- | + | dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet | + | ***Common parameters for node and controller*** | + | CSI_ENDPOINT | The UNIX socket address for handling gRPC calls | No | /var/run/csi/csi.sock | + | X_CSI_ISI_SKIP_CERTIFICATE_VALIDATION | Specifies whether SSL security needs to be enabled for communication between PowerScale and CSI Driver | No | true | + | X_CSI_ISI_PATH | Base path for the volumes to be created | Yes | | + | X_CSI_ALLOWED_NETWORKS | Custom networks for PowerScale export. List of networks that can be used for NFS I/O traffic; CIDR format should be used | No | empty | + | X_CSI_ISI_AUTOPROBE | To enable auto probing for driver | No | true | + | X_CSI_ISI_NO_PROBE_ON_START | Indicates whether the controller/node should probe during initialization | Yes | | + | X_CSI_ISI_VOLUME_PATH_PERMISSIONS | The permissions for isi volume directory path | Yes | 0777 | + | ***Controller parameters*** | + | X_CSI_MODE | Driver starting mode | No | controller | + | X_CSI_ISI_ACCESS_ZONE | Name of the access zone a volume can be created in | No | System | + | X_CSI_ISI_QUOTA_ENABLED | To enable SmartQuotas | Yes | | + | ***Node parameters*** | + | X_CSI_MAX_VOLUMES_PER_NODE | Specify the default value for the maximum number of volumes that the controller can publish to the node | Yes | 0 | + | X_CSI_MODE | Driver starting mode | No | node | + +6. Execute the following command to create the PowerScale custom resource: + ```kubectl create -f ```. + This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file. + +7. [Verify the CSI Driver installation](../../#verifying-the-driver-installation) + +**Note**: + 1. The "Kubelet config dir path" is not yet configurable for Operator-based driver installation. + 2. The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
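For orientation, a minimal `ContainerStorageModule` manifest might look like the sketch below. It is assembled only from the fields described above; the driver-type identifier, image tag, and namespace are assumptions, so prefer the sample CRs from the csm-operator repository as the authoritative starting point.

```yaml
# Illustrative sketch only -- values are assumptions; start from the official
# sample CR in the csm-operator repository.
apiVersion: storage.dell.com/v1
kind: ContainerStorageModule
metadata:
  name: isilon
  namespace: test-isilon
spec:
  driver:
    csiDriverType: "isilon"            # assumed driver-type identifier
    configVersion: v2.2.0              # must match a supported ConfigVersion
    replicas: 1
    dnsPolicy: ClusterFirstWithHostNet
    common:
      image: "dellemc/csi-isilon:v2.2.0"   # assumed image tag
      imagePullPolicy: IfNotPresent
      envs:
        - name: X_CSI_ISI_PATH
          value: "/ifs/data/csi"
```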
\ No newline at end of file diff --git a/content/v3/deployment/csmoperator/install.jpg b/content/v3/deployment/csmoperator/install.jpg new file mode 100644 index 0000000000..14b6362c45 Binary files /dev/null and b/content/v3/deployment/csmoperator/install.jpg differ diff --git a/content/v3/deployment/csmoperator/install_olm.jpg b/content/v3/deployment/csmoperator/install_olm.jpg new file mode 100644 index 0000000000..977acb9063 Binary files /dev/null and b/content/v3/deployment/csmoperator/install_olm.jpg differ diff --git a/content/v3/deployment/csmoperator/install_olm_pods.jpg b/content/v3/deployment/csmoperator/install_olm_pods.jpg new file mode 100644 index 0000000000..fff68a99e0 Binary files /dev/null and b/content/v3/deployment/csmoperator/install_olm_pods.jpg differ diff --git a/content/v3/deployment/csmoperator/install_pods.jpg b/content/v3/deployment/csmoperator/install_pods.jpg new file mode 100644 index 0000000000..174dd64d9b Binary files /dev/null and b/content/v3/deployment/csmoperator/install_pods.jpg differ diff --git a/content/v3/deployment/csmoperator/modules/_index.md b/content/v3/deployment/csmoperator/modules/_index.md new file mode 100644 index 0000000000..4b79544a51 --- /dev/null +++ b/content/v3/deployment/csmoperator/modules/_index.md @@ -0,0 +1,6 @@ +--- +title: "CSM Modules" +linkTitle: "CSM Modules" +description: Installation of Dell CSM Modules using Dell CSM Operator +weight: 2 +--- \ No newline at end of file diff --git a/content/v3/deployment/csmoperator/modules/authorization.md b/content/v3/deployment/csmoperator/modules/authorization.md new file mode 100644 index 0000000000..3e9307bab8 --- /dev/null +++ b/content/v3/deployment/csmoperator/modules/authorization.md @@ -0,0 +1,20 @@ +--- +title: Authorization +linkTitle: "Authorization" +description: > + Installing Authorization via Dell CSM Operator +--- + +## Installing Authorization via Dell CSM Operator + +The Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. + +To deploy the Dell CSM Operator, follow the instructions available [here](../../#installation). + +There are [sample manifests](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerscale.yaml) provided which can be edited to do an easy installation of the driver along with the module. + +### Install Authorization + +1. Create the required Secrets as documented in the [Helm chart procedure](../../../../authorization/deployment/#configuring-a-dell-csi-driver). + +2. Follow the instructions available [here](../../drivers/powerscale/#install-driver) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable Authorization. 
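The enabling itself is a flag flip in the CR's module section. The fragment below is a sketch based on the published sample manifest; the component name and `PROXY_HOST` env are assumptions to verify against the sample for your driver.

```yaml
# Hypothetical module section of a ContainerStorageModule CR with Authorization on.
modules:
  - name: authorization
    enabled: true                           # default in the samples is false
    configVersion: v1.2.0
    components:
      - name: karavi-authorization-proxy    # assumed component name
        envs:
          - name: "PROXY_HOST"              # assumed env name
            value: "csm-authorization.example.com"   # placeholder hostname
```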
\ No newline at end of file diff --git a/content/v3/deployment/csmoperator/troubleshooting/_index.md b/content/v3/deployment/csmoperator/troubleshooting/_index.md new file mode 100644 index 0000000000..0af6303619 --- /dev/null +++ b/content/v3/deployment/csmoperator/troubleshooting/_index.md @@ -0,0 +1,60 @@ +--- +title: "Troubleshooting" +linkTitle: "Troubleshooting" +Description: > + Troubleshooting guide for Dell CSM Operator +weight: 3 +--- + + - [Can CSM Operator manage existing drivers installed using Helm charts or the Dell CSI Operator?](#can-csm-operator-manage-existing-drivers-installed-using-helm-charts-or-the-dell-csi-operator) + - [Why do some of the Custom Resource fields show up as invalid or unsupported in the OperatorHub GUI?](#why-do-some-of-the-custom-resource-fields-show-up-as-invalid-or-unsupported-in-the-operatorhub-gui) + - [How can I view detailed logs for the CSM Operator?](#how-can-i-view-detailed-logs-for-the-csm-operator) + - [My Dell CSI Driver install failed. How do I fix it?](#my-dell-csi-driver-install-failed-how-do-i-fix-it) + +### Can CSM Operator manage existing drivers installed using Helm charts or the Dell CSI Operator? +The Dell CSM Operator is unable to manage any existing driver installed using Helm charts or the Dell CSI Operator. If you have already installed one of the Dell CSI drivers in your cluster and want to use the CSM Operator based deployment, uninstall the driver and then redeploy it via the Dell CSM Operator. + + +### Why do some of the Custom Resource fields show up as invalid or unsupported in the OperatorHub GUI? +The Dell CSM Operator is not fully compliant with the OperatorHub React UI elements. Due to this, some of the Custom Resource fields may show up as invalid or unsupported in the OperatorHub GUI. To get around this problem, use `kubectl/oc` commands to get details about the Custom Resource (CR). This issue will be fixed in the upcoming releases of the Dell CSM Operator. + +### How can I view detailed logs for the CSM Operator? +Detailed logs of the CSM Operator can be displayed using the following command: +``` +kubectl logs -n +``` + +### My Dell CSI Driver install failed. How do I fix it? +Describe the current state by issuing: +`kubectl describe csm -n ` + +In the output, refer to the status and events section. If the status shows pods in a failed state, refer to the CSI Driver Troubleshooting guide. + +Example: +``` +Status: + Controller Status: + Available: 0 + Desired: 2 + Failed: 2 + Node Status: + Available: 0 + Desired: 2 + Failed: 2 + State: Failed + +Events + Warning Updated 67s (x15 over 2m4s) csm (combined from similar events): at 1646848059520359167 Pod error details ControllerError: ErrImagePull= pull access denied for dellem/csi-isilon, repository does not exist or may require 'docker login': denied: requested access to the resource is denied, Daemonseterror: ErrImagePull= pull access denied for dellem/csi-isilon, repository does not exist or may require 'docker login': denied: requested access to the resource is denied +``` + +The event above shows that the image dellem/csi-isilon does not exist; to resolve this, the user can `kubectl edit` the csm resource and update it to the correct image (see the sketch below). + + +To get details of the driver installation, run: `kubectl logs -n dell-csm-operator`.
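For the image error shown in the event above, patching the resource directly is one way to apply the fix. A sketch, assuming the CR is named `isilon`, lives in `test-isilon`, and keeps the common driver image at `spec.driver.common.image`:

```console
# Correct the misspelled image via a merge patch on the csm resource.
kubectl patch csm isilon -n test-isilon --type merge \
  -p '{"spec":{"driver":{"common":{"image":"dellemc/csi-isilon:v2.2.0"}}}}'
```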
+ +Typical reasons for errors: +* Incorrect driver version +* Incorrect driver type +* Incorrect driver Spec env, args for containers +* Incorrect RBAC permissions + diff --git a/content/v3/deployment/csmoperator/uninstall.JPG b/content/v3/deployment/csmoperator/uninstall.JPG new file mode 100644 index 0000000000..96aba500e9 Binary files /dev/null and b/content/v3/deployment/csmoperator/uninstall.JPG differ diff --git a/content/v3/deployment/csmoperator/uninstall_olm.JPG b/content/v3/deployment/csmoperator/uninstall_olm.JPG new file mode 100644 index 0000000000..dcf78dba4e Binary files /dev/null and b/content/v3/deployment/csmoperator/uninstall_olm.JPG differ diff --git a/content/v3/grasp/video.md b/content/v3/grasp/video.md index 618fd7ca22..19408b2f6b 100644 --- a/content/v3/grasp/video.md +++ b/content/v3/grasp/video.md @@ -3,8 +3,8 @@ title: Quick video lessons Description: Short videos to quickly grasp the concepts --- -## Getting started with Kubernetes on Dell EMC Storage +## Getting started with Kubernetes on Dell Storage {{< youtube id="vjuLhau5vBY" >}} -## Dell EMC CSI Operator deployment in OpenShift +## Dell CSI Operator deployment in OpenShift {{< youtube id="l4z2tRqHnSg" >}} diff --git a/content/v3/observability/_index.md b/content/v3/observability/_index.md index 2ef8f3f6da..6b3ff27be8 100644 --- a/content/v3/observability/_index.md +++ b/content/v3/observability/_index.md @@ -3,25 +3,25 @@ title: "Observability" linkTitle: "Observability" weight: 5 Description: > - Dell EMC Container Storage Modules (CSM) for Observability + Dell Container Storage Modules (CSM) for Observability --- - [Container Storage Modules](https://github.com/dell/csm) (CSM) for Observability is part of the open-source suite of Kubernetes storage enablers for Dell EMC products. + [Container Storage Modules](https://github.com/dell/csm) (CSM) for Observability is part of the open-source suite of Kubernetes storage enablers for Dell products. - It is an OpenTelemetry agent that collects array-level metrics for Dell EMC storage so they can be scraped into a Prometheus database. With CSM for Observability, you will gain visibility not only on the capacity of the volumes/file shares you manage with Dell CSM CSI (Container Storage Interface) drivers but also their performance in terms of bandwidth, IOPS, and response time. + It is an OpenTelemetry agent that collects array-level metrics for Dell storage so they can be scraped into a Prometheus database. With CSM for Observability, you will gain visibility not only on the capacity of the volumes/file shares you manage with Dell CSM CSI (Container Storage Interface) drivers but also their performance in terms of bandwidth, IOPS, and response time. Thanks to pre-packaged Grafana dashboards, you will be able to go through these metrics history and see the topology between a Kubernetes PV (Persistent Volume) and its translation as a LUN or file share in the backend array. This module also allows Kubernetes admins to collect array level metrics to check the overall capacity and performance directly from the Prometheus/Grafana tools rather than interfacing directly with the storage system itself. Metrics data is collected and pushed to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector), so it can be processed, and exported in a format consumable by Prometheus. SSL certificates for TLS between nodes are handled by [cert-manager](https://github.com/jetstack/cert-manager). 
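Before wiring Prometheus to the collector, a quick sanity check that the collector service is reachable can save debugging time. A sketch, assuming the `otel-collector` service name from the scrape configuration shown later on these pages and the `karavi` namespace used in the Helm instructions:

```console
# Confirm the OpenTelemetry Collector service that Prometheus will scrape exists.
kubectl get svc otel-collector -n karavi
```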
-CSM for Observability is composed of several services, each living in its own GitHub repository. Contributions can be made to this repository or any of the CSM for Observability repositories listed below. +CSM for Observability is composed of several services, each living in its own GitHub repository, that can be installed following one of the three deployments we support [here](deployment). Contributions can be made to this repository or any of the CSM for Observability repositories listed below. {{}} | Name | Repository | Description | | ---- | --------- | ----------- | -| Performance Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Performance Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell EMC PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. | -| Performance Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Performance Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell EMC PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. | -| Volume Topology | [CSM Topology](https://github.com/dell/karavi-topology) | Topology provides Kubernetes administrators with the topology data related to containerized storage that is provisioned by a CSI (Container Storage Interface) Driver for Dell EMC storage products. Please visit the repository for more information. | +| Performance Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Performance Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. | +| Performance Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Performance Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. 
| +| Volume Topology | [CSM Topology](https://github.com/dell/karavi-topology) | Topology provides Kubernetes administrators with the topology data related to containerized storage that is provisioned by a CSI (Container Storage Interface) Driver for Dell storage products. The Topology service is enabled by default as part of the CSM for Observability Helm Chart [values file](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml). Please visit the repository for more information. | {{
}} ## CSM for Observability Capabilities @@ -46,7 +46,7 @@ CSM for Observability provides the following capabilities: {{}} | COP/OS | Supported Versions | |-|-| -| Kubernetes | 1.20, 1.21, 1.22 | +| Kubernetes | 1.21, 1.22, 1.23 | | Red Hat OpenShift | 4.8, 4.9 | | Rancher Kubernetes Engine | yes | | RHEL | 7.x, 8.x | @@ -58,7 +58,7 @@ CSM for Observability provides the following capabilities: {{
}} | | PowerFlex | PowerStore | |---------------|:-------------------:|:----------------:| -| Storage Array | 3.5.x, 3.6.x | 1.0.x, 2.0.x | +| Storage Array | 3.5.x, 3.6.x | 1.0.x, 2.0.x, 2.1.x | {{
}} ## Supported CSI Drivers @@ -67,8 +67,8 @@ CSM for Observability supports the following CSI drivers and versions. {{}} | Storage Array | CSI Driver | Supported Versions | | ------------- | ---------- | ------------------ | -| CSI Driver for Dell EMC PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0,v2.1 | -| CSI Driver for Dell EMC PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0,v2.1 | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | {{
}} ## Topology Data diff --git a/content/v3/observability/deployment/_index.md b/content/v3/observability/deployment/_index.md index b446d71974..582e8d90c0 100644 --- a/content/v3/observability/deployment/_index.md +++ b/content/v3/observability/deployment/_index.md @@ -3,35 +3,30 @@ title: Deployment linktitle: Deployment weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Observability Deployment + Dell Container Storage Modules (CSM) for Observability Deployment --- CSM for Observability can be deployed in one of three ways: -- [CSM Installer](../../deployment) (*Recommended installation method*) - [Helm](./helm) - [CSM for Observability Installer](./online) - [CSM for Observability Offline Installer](./offline) -## Prerequisites - -- Helm 3.3 -- The deployment of one or more [supported](../#supported-csi-drivers) Dell EMC CSI drivers - ## Post Installation Dependencies The following third-party components are required in the same Kubernetes cluster where CSM for Observability has been deployed: * [Prometheus](#prometheus) * [Grafana](#grafana) +* [Other Deployment Methods](#other-deployment-methods) -These components must be deployed according to the specifications defined below. +There are various ways to deploy these components. We recommend following the Helm deployments according to the specifications defined below. **Tip**: CSM for Observability must be deployed first. Once the module has been deployed, you can proceed to deploying/configuring Prometheus and Grafana. ### Prometheus -The Prometheus service should be running on the same Kubernetes cluster as the CSM for Observability services. As part of the CSM for Observability deployment, the OpenTelemetry Collector gets deployed. The OpenTelemetry Collector is what CSM for Observability pushes metrics so that the metrics can be consumed by Prometheus. This means that Prometheus must be configured to scrape the metrics data from the OpenTelemetry Collector. +The Prometheus service should be running on the same Kubernetes cluster as the CSM for Observability services. As part of the CSM for Observability deployment, the OpenTelemetry Collector gets deployed. CSM for Observability pushes metrics to the OpenTelemetry Collector where the metrics are consumed by Prometheus. Prometheus must be configured to scrape the metrics data from the OpenTelemetry Collector. | Supported Version | Image | Helm Chart | | ----------------- | ----------------------- | ------------------------------------------------------------ | @@ -39,7 +34,7 @@ The Prometheus service should be running on the same Kubernetes cluster as the C **Note**: It is the user's responsibility to provide persistent storage for Prometheus if they want to preserve historical data. -#### Prometheus Deployment +#### Prometheus Helm Deployment Here is a sample minimal configuration for Prometheus. Please note that the configuration below uses insecure skip verify. If you wish to properly configure TLS, you will need to provide a ca_file in the Prometheus configuration. The certificate provided as part of the CSM for Observability deployment should be signed by this same CA. For more information about Prometheus configuration, see [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration). @@ -62,24 +57,22 @@ Here is a sample minimal configuration for Prometheus. 
Please note that the conf enabled: true image: repository: quay.io/prometheus/prometheus - tag: v2.22.0 + tag: v2.23.0 pullPolicy: IfNotPresent persistentVolume: enabled: false service: type: NodePort servicePort: 9090 - serverFiles: - prometheus.yml: - scrape_configs: - - job_name: 'karavi-metrics-powerflex' - scrape_interval: 5s - scheme: https - static_configs: - - targets: ['otel-collector:8443'] - tls_config: - insecure_skip_verify: true - ``` + extraScrapeConfigs: | + - job_name: 'karavi-metrics-powerflex' + scrape_interval: 5s + scheme: https + static_configs: + - targets: ['otel-collector:8443'] + tls_config: + insecure_skip_verify: true + ``` 2. If using Rancher, create a ServiceMonitor. @@ -117,7 +110,7 @@ Here is a sample minimal configuration for Prometheus. Please note that the conf On your terminal, run the command below: ```terminal - helm install prometheus prometheus-community/prometheus -n [CSM_NAMESPACE] --create-namespace -f prometheus-values.yaml + helm install prometheus prometheus-community/prometheus -n [CSM_NAMESPACE] -f prometheus-values.yaml ``` ### Grafana @@ -156,7 +149,7 @@ Settings for the Grafana SimpleJson data source: | With CA Cert | Enabled (If using CA certificate) | -#### Grafana Deployment +#### Grafana Helm Deployment Below are the steps to deploy a new Grafana instance into your Kubernetes cluster: @@ -192,7 +185,7 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste 2. Create a values file. - Create a Config file named `grafana-configmap.yaml` The file should look like this: + Create a Config file named `grafana-values.yaml` The file should look like this: ```yaml # grafana-values.yaml @@ -273,6 +266,11 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste helm install grafana grafana/grafana -n [CSM_NAMESPACE] -f grafana-values.yaml ``` +### Other Deployment Methods + +- [Grafana Labs Operator Deployment](https://grafana.com/docs/grafana-cloud/kubernetes/prometheus/prometheus_operator/) +- [Rancher Monitoring and Alerting Deployment](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) + ## Importing CSM for Observability Dashboards Once Grafana is properly configured, you can import the pre-built observability dashboards. Log into Grafana and click the + icon in the side menu. Then click Import. From here you can upload the JSON files or paste the JSON text directly into the text area. Below are the locations of the dashboards that can be imported: @@ -283,7 +281,7 @@ Once Grafana is properly configured, you can import the pre-built observability | [PowerFlex: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | | [PowerFlex: Storage Pool Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/storage_consumption.json) | Provides visibility into the total, used, and available capacity for a storage class and associated underlying storage construct. 
| | [PowerStore: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/volume_io_metrics.json) | *As of Release 0.4.0:* Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume | -| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell EMC CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. | +| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. | ## Dynamic Configuration @@ -400,7 +398,7 @@ In this case, all storage system requests made by CSM for Observability will be ``` #### Update Storage Systems -If the list of storage systems managed by a Dell EMC CSI Driver have changed, the following steps can be performed to update CSM for Observability to reference the updated systems: +If the list of storage systems managed by a Dell CSI Driver have changed, the following steps can be performed to update CSM for Observability to reference the updated systems: 1. Delete the current `karavi-authorization-config` Secret from the CSM namespace. ```console @@ -416,26 +414,26 @@ If the list of storage systems managed by a Dell EMC CSI Driver have changed, th In this case all storage system requests made by CSM for Observability will not be routed through the Authorization module. The following must be performed: -#### CSI Driver for Dell EMC PowerFlex +#### CSI Driver for Dell PowerFlex 1. Delete the current `vxflexos-config` Secret from the CSM namespace. ```console $ kubectl delete secret vxflexos-config -n [CSM_NAMESPACE] ``` -2. Copy the `vxflexos-config` Secret from the CSI Driver for Dell EMC PowerFlex namespace to the CSM namespace. +2. Copy the `vxflexos-config` Secret from the CSI Driver for Dell PowerFlex namespace to the CSM namespace. ```console $ kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` -### CSI Driver for Dell EMC PowerStore +### CSI Driver for Dell PowerStore 1. Delete the current `powerstore-config` Secret from the CSM namespace. ```console $ kubectl delete secret powerstore-config -n [CSM_NAMESPACE] ``` -2. Copy the `powerstore-config` Secret from the CSI Driver for Dell EMC PowerStore namespace to the CSM namespace. +2. Copy the `powerstore-config` Secret from the CSI Driver for Dell PowerStore namespace to the CSM namespace. 
```console $ kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - - ``` \ No newline at end of file + ``` diff --git a/content/v3/observability/deployment/helm.md b/content/v3/observability/deployment/helm.md index 46c9947d7c..6d76f8216f 100644 --- a/content/v3/observability/deployment/helm.md +++ b/content/v3/observability/deployment/helm.md @@ -3,53 +3,68 @@ title: Helm linktitle: Helm weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Observability Helm deployment + Dell Container Storage Modules (CSM) for Observability Helm deployment --- The Container Storage Modules (CSM) for Observability Helm chart bootstraps an Observability deployment on a Kubernetes cluster using the Helm package manager. ## Prerequisites -- A [supported](../../../csidriver/#features-and-capabilities) CSI Driver is deployed -- The cert-manager CustomResourceDefinition resources are created. +- Helm 3.3 +- The deployment of one or more [supported](../../#supported-csi-drivers) Dell CSI drivers - ```console - $ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.crds.yaml - ``` +## Install the CSM for Observability Helm Chart +**Steps** +1. Create a namespace where you want to install the module `kubectl create namespace karavi` + +2. Install cert-manager CRDs `kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml` -## Copy the CSI Driver Secret +3. Add the Dell Helm Charts repo `helm repo add dell https://dell.github.io/helm-charts` -Copy the config Secret from the Dell CSI Driver namespace into the namespace where CSM for Observability is deployed. +4. Copy only the deployed CSI driver entities to the Observability namespace + #### PowerFlex -### PowerFlex + 1. Copy the config Secret from the CSI PowerFlex namespace into the CSM for Observability namespace: -```console -$ kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - -``` + `kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` -__Note__: The target namespace must exist before executing this command. + If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps: -### PowerStore + 2. Copy the driver configuration parameters ConfigMap from the CSI PowerFlex namespace into the CSM for Observability namespace: + + `kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -` -```console -$ kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - -``` + 3. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerFlex namespace into the CSM for Observability namespace: -__Note__: The target namespace must exist before executing this command. 
+ `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
-## Add the Repo
+   #### PowerStore
-```console
-$ helm repo add dell https://dell.github.io/helm-charts
-```
+   1. Copy the config Secret from the CSI PowerStore namespace into the CSM for Observability namespace:
-## Installing the Chart
+      `kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
-```console
-$ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] --create-namespace
-```
+5. Configure the [parameters](#configuration) and install the CSM for Observability Helm Chart
-The [configuration](#configuration) section below lists all the parameters that can be configured during installation
+   A default values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml) that can be used for installation. This can be copied into a file named `myvalues.yaml` and either used as is or modified accordingly.
+
+   __Note:__
+   - The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
+   - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in your values file for CSM Observability.
+
+   ```console
+   $ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] -f myvalues.yaml
+   ```
+
+   Alternatively, you can specify each parameter using the '--set key=value[,key=value]' and/or '--set-file key=value[,key=value]' arguments to 'helm install'. For example:
+
+   ```console
+   $ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] \
+     --set-file karaviTopology.certificateFile= \
+     --set-file karaviTopology.privateKeyFile= \
+     --set-file otelCollector.certificateFile= \
+     --set-file otelCollector.privateKeyFile=
+   ```

## Configuration

@@ -76,6 +91,9 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerflex.volumePollFrequencySeconds` | The polling frequency (in seconds) to gather volume metrics | `10` |
| `karaviMetricsPowerflex.storageClassPoolPollFrequencySeconds` | The polling frequency (in seconds) to gather storage class/pool metrics | `10` |
| `karaviMetricsPowerflex.concurrentPowerflexQueries` | The number of simultaneous metrics queries to make to PowerFlex (MUST be less than 10; otherwise, several request errors from PowerFlex will ensue). | `10` |
+| `karaviMetricsPowerflex.authorization.enabled` | [Authorization](../../../authorization) is an optional feature to apply credential shielding of the backend PowerFlex. | `false` |
+| `karaviMetricsPowerflex.authorization.proxyHost` | Hostname of the csm-authorization server. | |
+| `karaviMetricsPowerflex.authorization.skipCertificateValidation` | A boolean that enables/disables certificate validation of the csm-authorization server. | |
| `karaviMetricsPowerflex.sdcMetricsEnabled` | Enable PowerFlex SDC Metrics Collection | `true` |
| `karaviMetricsPowerflex.volumeMetricsEnabled` | Enable PowerFlex Volume Metrics Collection | `true` |
| `karaviMetricsPowerflex.storageClassPoolMetricsEnabled` | Enable PowerFlex Storage Class/Pool Metrics Collection | `true` |
@@ -97,22 +115,3 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerstore.zipkin.uri` | URI of a Zipkin instance where tracing data can be forwarded | |
| `karaviMetricsPowerstore.zipkin.serviceName` | Service name used for Zipkin tracing data | `metrics-powerstore`|
| `karaviMetricsPowerstore.zipkin.probability` | Percentage of trace information to send to Zipkin (Valid range: 0.0 to 1.0) | `0` |
-
-
-Specify each parameter using the '--set key=value[,key=value]' and/or '--set-file key=value[,key=value] arguments to 'helm install'. For example:
-
-```console
-$ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] --create-namespace \
-  --set-file karaviTopology.certificateFile= \
-  --set-file karaviTopology.privateKeyFile= \
-  --set-file otelCollector.certificateFile= \
-  --set-file otelCollector.privateKeyFile=
-```
-
-Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example:
-
-```console
-$ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] --create-namespace -f values.yaml
- ```
-
-__Note__: You can use the default [values.yaml](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml)
\ No newline at end of file
diff --git a/content/v3/observability/deployment/offline.md b/content/v3/observability/deployment/offline.md
index 67d4948c4d..076921deb0 100644
--- a/content/v3/observability/deployment/offline.md
+++ b/content/v3/observability/deployment/offline.md
@@ -3,11 +3,16 @@ title: Offline Installer
 linktitle: Offline Installer
 weight: 3
 description: >
-  Dell EMC Container Storage Modules (CSM) for Observability Offline Installer
+  Dell Container Storage Modules (CSM) for Observability Offline Installer
 ---

The following instructions can be followed when a Helm chart will be installed in an environment that does not have an internet connection and will be unable to download the Helm chart and related Docker images.

+## Prerequisites
+
+- Helm 3.3
+- The deployment of one or more [supported](../#supported-csi-drivers) Dell CSI drivers
+
### Dependencies

Multiple Linux-based systems may be required to create and process an offline bundle for use.
@@ -125,6 +130,16 @@ To perform an offline installation of a Helm chart, the following steps should b [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - ``` + If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-emc-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps: + + ``` + [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - + ``` + + ``` + [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - + ``` + CSI Driver for PowerStore ``` [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - @@ -132,7 +147,10 @@ To perform an offline installation of a Helm chart, the following steps should b 4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository. - **Note:** Optionally, you could provide your own [configurations](../helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml). + **Note:** + - Optionally, you could provide your own [configurations](../helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml). + - The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install. + - If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured. ``` [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm install -n install-namespace app-name karavi-observability @@ -145,17 +163,4 @@ To perform an offline installation of a Helm chart, the following steps should b TEST SUITE: None ``` - -5. (Optional) The following steps can be performed to enable CSM for Observability to use an existing instance of Authorization for accessing the REST API for the given storage systems. - - **Note:** CSM for Authorization currently does not support the Observability module for PowerStore. - - Copy the proxy Secret into the CSM for Observability namespace: - ``` - [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f - - ``` - - Use `karavictl` to update the Observability module deployment to use the Authorization module. 
Required parameters are the location of the sidecar-proxy Docker image and the URL of the Authorization module proxy. If the Authorization module was installed using certificates, the flags `--insecure=false` and `--root-certificate ` must be also be provided. If certificates were not provided during installation, the flag `--insecure=true` must be provided. - ``` - [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secrets,deployments -n [CSM_NAMESPACE] -o yaml | karavictl inject --insecure=false --root-certificate --image-addr --proxy-host | kubectl apply -f - - ``` \ No newline at end of file + \ No newline at end of file diff --git a/content/v3/observability/deployment/online.md b/content/v3/observability/deployment/online.md index 8beca80c6a..60e83ef3a9 100644 --- a/content/v3/observability/deployment/online.md +++ b/content/v3/observability/deployment/online.md @@ -3,7 +3,7 @@ title: Installer linktitle: Installer weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Observability Installer + Dell Container Storage Modules (CSM) for Observability Installer --- Copying ConfigMap from vxflexos to karavi Success + | + |--> Copying Karavi Authorization Secrets from vxflexos to karavi Success | - |- Enabling Karavi Authorization for Karavi Observability Success + |- Installing Karavi Observability helm chart Success | |- Waiting for pods in namespace karavi to be ready Success ``` diff --git a/content/v3/observability/metrics/_index.md b/content/v3/observability/metrics/_index.md index e41fd14b7f..309ac13afc 100644 --- a/content/v3/observability/metrics/_index.md +++ b/content/v3/observability/metrics/_index.md @@ -3,7 +3,7 @@ title: Metrics linktitle: Metrics weight: 2 description: > - Dell EMC Container Storage Modules (CSM) for Observability Metrics + Dell Container Storage Modules (CSM) for Observability Metrics --- This section outlines the metrics collected by Container Storage Modules (CSM) for Observability in the areas of I/O Performance and Storage Capacity. All metrics are available from the OpenTelemetry collector endpoint. Please see the [CSM for Observability](../) for more information on deploying and configuring the OpenTelemetry collector. \ No newline at end of file diff --git a/content/v3/observability/metrics/powerflex.md b/content/v3/observability/metrics/powerflex.md index c1e2931407..0b9b11045e 100644 --- a/content/v3/observability/metrics/powerflex.md +++ b/content/v3/observability/metrics/powerflex.md @@ -3,7 +3,7 @@ title: PowerFlex Metrics linktitle: PowerFlex Metrics weight: 1 description: > - Dell EMC Container Storage Modules (CSM) for Observability PowerFlex Metrics + Dell Container Storage Modules (CSM) for Observability PowerFlex Metrics --- This section outlines the metrics collected by the Container Storage Modules (CSM) Observability module for PowerFlex. The [Grafana reference dashboards](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex) for PowerFlex metrics can be uploaded to your Grafana instance. 
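Since all of these metrics are served from the OpenTelemetry collector endpoint, a quick sanity check after deployment is to confirm the collector is reachable before wiring up Grafana. The sketch below is hedged: the `karavi` namespace and the `otel-collector` service name and port are assumptions based on the defaults used elsewhere in these docs, not guaranteed names.

```console
# List the Observability services, then forward the collector port locally.
# Service name and port are assumptions and may differ in your install.
$ kubectl get svc -n karavi
$ kubectl port-forward -n karavi svc/otel-collector 8443:8443
```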
diff --git a/content/v3/observability/metrics/powerstore.md b/content/v3/observability/metrics/powerstore.md
index 77bc7f700c..3df657c10b 100644
--- a/content/v3/observability/metrics/powerstore.md
+++ b/content/v3/observability/metrics/powerstore.md
@@ -3,7 +3,7 @@ title: PowerStore Metrics
 linktitle: PowerStore Metrics
 weight: 1
 description: >
-  Dell EMC Container Storage Modules (CSM) for Observability PowerStore Metrics
+  Dell Container Storage Modules (CSM) for Observability PowerStore Metrics
 ---

This section outlines the metrics collected by the Container Storage Modules (CSM) Observability module for PowerStore. The [Grafana reference dashboards](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore) for PowerStore metrics can be uploaded to your Grafana instance.

diff --git a/content/v3/observability/troubleshooting/_index.md b/content/v3/observability/troubleshooting/_index.md
index 55a7939213..4c094c212d 100644
--- a/content/v3/observability/troubleshooting/_index.md
+++ b/content/v3/observability/troubleshooting/_index.md
@@ -13,6 +13,7 @@ Description: >
 4. [How can I debug and troubleshoot issues with Kubernetes?](#how-can-i-debug-and-troubleshoot-issues-with-kubernetes)
 5. [How can I troubleshoot latency problems with CSM for Observability?](#how-can-i-troubleshoot-latency-problems-with-csm-for-observability)
 6. [Why does the Observability installation timeout with pods stuck in 'ContainerCreating'/'CrashLoopBackOff'/'Error' stage?](#why-does-the-observability-installation-timeout-with-pods-stuck-in-containercreatingcrashloopbackofferror-stage)
+7. [Why do I see FailedMount warnings when describing pods in my cluster?](#why-do-i-see-failedmount-warnings-when-describing-pods-in-my-cluster)

### Why do I see a certificate problem when accessing the topology service outside of my Kubernetes cluster?

@@ -233,3 +234,12 @@ error registering secret controller: no matches for kind "MutatingWebhookConfigu
 ```
If the Kubernetes cluster version is 1.22.2 (or higher), this error is due to an incompatible [cert-manager](https://github.com/jetstack/cert-manager) version. Please upgrade to the latest CSM for Observability release (v1.0.1 or higher).
+
+### Why do I see FailedMount warnings when describing pods in my cluster?
+
+This warning can arise while the self-signed certificate for otel-collector is being issued. It can take a few minutes for the signed certificate to be generated and consumed in the namespace. Once the certificate is consumed, the FailedMount warnings are resolved and the containers start properly.
+```console +[root@:~]$ kubectl describe pod -n $namespace $pod +MountVolume.SetUp failed for volume "tls-secret" : secret "otel-collector-tls" not found +Unable to attach or mount volumes: unmounted volumes=[tls-secret], unattached volumes=[vxflexos-config-params vxflexos-config tls-secret karavi-metrics-powerflex-configmap kube-api-access-4fqgl karavi-authorization-config proxy-server-root-certificate]: timed out waiting for the condition +``` \ No newline at end of file diff --git a/content/v3/observability/uninstall/_index.md b/content/v3/observability/uninstall/_index.md index 5b272bdaa3..296ebfa64c 100644 --- a/content/v3/observability/uninstall/_index.md +++ b/content/v3/observability/uninstall/_index.md @@ -3,7 +3,7 @@ title: Uninstallation linktitle: Uninstallation weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Observability Uninstallation + Dell Container Storage Modules (CSM) for Observability Uninstallation --- This section outlines the uninstallation steps for Container Storage Modules (CSM) for Observability. @@ -18,5 +18,5 @@ $ helm delete karavi-observability --namespace [CSM_NAMESPACE] You may also want to uninstall the CRDs created for cert-manager. ```console -$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.crds.yaml +$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml ``` diff --git a/content/v3/observability/upgrade/_index.md b/content/v3/observability/upgrade/_index.md index b8eafe9dc9..a44d38c615 100644 --- a/content/v3/observability/upgrade/_index.md +++ b/content/v3/observability/upgrade/_index.md @@ -3,27 +3,27 @@ title: Upgrade linktitle: Upgrade weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Observability Upgrade + Dell Container Storage Modules (CSM) for Observability Upgrade --- -CSM for Observability can only be upgraded via the Helm chart following the instructions below. - -CSM for Observability Helm upgrade can be used if the initial deployment was performed using the [Helm chart](../deployment/helm) or [Online Installer](../deployment/online). - ->Note: The [Offline Installer](../deployment/offline) does not support upgrade. +This section outlines the upgrade steps for Container Storage Modules (CSM) for Observability. CSM for Observability upgrade can be achieved in one of two ways: +- Helm Chart Upgrade +- Online Installer Upgrade ## Helm Chart Upgrade +CSM for Observability Helm upgrade supports [Helm](../deployment/helm), [Online Installer](../deployment/online), and [Offline Installer](../deployment/offline) deployments. + To upgrade an existing Helm installation of CSM for Observability to the latest release, download the latest Helm charts. -```console +``` helm repo update ``` Check if the latest Helm chart version is available: -```console +``` helm search repo dell NAME CHART VERSION APP VERSION DESCRIPTION dell/karavi-observability 1.0.1 1.0.0 CSM for Observability is part of the [Container... 
@@ -33,8 +33,50 @@ dell/karavi-observability 1.0.1 1.0.0 CSM for Observab

Upgrade to the latest CSM for Observability release:

-```console
-$ helm upgrade --version $latest_chart_version --values values.yaml karavi-observability dell/karavi-observability -n $namespace
-```
+Upgrade Helm and Online Installer deployments:
+```
+$ helm upgrade --version $latest_chart_version --values values.yaml karavi-observability dell/karavi-observability -n $namespace
+```
+
+Upgrade Offline Installer deployment:
+```
+$ helm upgrade --version $latest_chart_version karavi-observability dell/karavi-observability -n $namespace
+```
+
+The [configuration](../deployment/helm#configuration) section lists all the parameters that can be configured using the `values.yaml` file.
+
+## Online Installer Upgrade
+
+CSM for Observability online installer upgrade can be used if the initial deployment was performed using the [Online Installer](../deployment/online) or [Helm](../deployment/helm).
+
+1. Change to the installer directory:
+   ```
+   [user@system /home/user]# cd karavi-observability/installer
+   ```
+2. Update the `values.yaml` file as needed. Configuration options are outlined in the [Helm chart deployment section](../deployment/helm#configuration).
-The [configuration](../deployment/helm#configuration) section lists all the parameters that can be configured using the values.yaml file.
\ No newline at end of file
+3. Execute the `./karavi-observability-install.sh` script:
+   ```
+   [user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh upgrade --namespace $namespace --values myvalues.yaml --version $latest_chart_version
+   ---------------------------------------------------------------------------------
+   > Upgrading Karavi Observability in namespace karavi on 1.21
+   ---------------------------------------------------------------------------------
+   |
+   |- Karavi Observability is installed. Upgrade can continue     Success
+   |
+   |- Verifying Kubernetes versions
+   |
+   |--> Verifying minimum Kubernetes version                      Success
+   |
+   |--> Verifying maximum Kubernetes version                      Success
+   |
+   |- Verifying helm version                                      Success
+   |
+   |- Upgrading CertManager CRDs                                  Success
+   |
+   |- Updating helm repositories                                  Success
+   |
+   |- Upgrading Karavi Observability helm chart                   Success
+   |
+   |- Waiting for pods in namespace karavi to be ready            Success
+   ```
diff --git a/content/v1/policies/_index.md b/content/v3/policies/_index.md
similarity index 100%
rename from content/v1/policies/_index.md
rename to content/v3/policies/_index.md
diff --git a/content/v3/policies/deprecationpolicy/_index.md b/content/v3/policies/deprecationpolicy/_index.md
new file mode 100644
index 0000000000..19a4783ba1
--- /dev/null
+++ b/content/v3/policies/deprecationpolicy/_index.md
@@ -0,0 +1,31 @@
+---
+title: "Deprecation Policy"
+linkTitle: "Deprecation Policy"
+weight: 1
+Description: >
+  Dell Technologies (Dell) Container Storage Modules (CSM) Deprecation Policy
+---
+
+The Deprecation policy for Dell Container Storage Modules (CSM) is in place to help users prevent any disruptive incidents from occurring. We aim to provide appropriate notice when CLI elements, APIs, features, or behaviors are slated to be removed.
+
+### Deprecating a CLI Element
+
+This captures situations when a flag or command is removed from a CLI.
+
+CLI elements must function after their announced deprecation for no less than two releases. This includes when the releases become Generally Available (GA), including both major and minor release versions.
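To make the two-release expectation concrete, a deprecated CLI element should keep working while announcing its pending removal. The following is a purely hypothetical sketch; the CLI name, flag, and version numbers are illustrative and not taken from any actual CSM tool:

```console
$ csmcli volume create --retention-policy 7d
WARNING: flag "--retention-policy" is deprecated and will be removed in release v1.6; use "--retention" instead.
Volume created.
```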
+
+When deprecating a CLI command, a warning message must be displayed each time the command is used. This warning message should capture the deprecation details along with the release in which the command that is being deprecated will be removed.
+
+### Deprecating an API, Feature, or Behavior
+
+CSM features must function after their announced deprecation for no less than two releases. This includes when the releases become Generally Available (GA), including both major and minor release versions.
+
+### Tech Previews
+
+Features released as tech preview are not supported and therefore are not intended for production. No deprecation notice will be required before removing any features/behaviors that are released as tech previews.
+
+### Required Deprecation Notice
+
+CSM documentation for the release in which the deprecation is being announced must include deprecation details along with the release in which the item(s) being deprecated will be removed.
+
+In addition, the changelog and release notes for the release in which the deprecation is being announced must contain a section titled "Important Deprecation Information". In this section, the deprecation details must be provided along with the release in which the item(s) being deprecated will be removed.
diff --git a/content/v3/replication/_index.md b/content/v3/replication/_index.md
index 515872d1de..fe7de3d6dd 100644
--- a/content/v3/replication/_index.md
+++ b/content/v3/replication/_index.md
@@ -3,11 +3,11 @@ title: "Replication"
 linkTitle: "Replication"
 weight: 6
 Description: >
-  Dell EMC Container Storage Modules (CSM) for Replication
+  Dell Container Storage Modules (CSM) for Replication
 ---

-[Container Storage Modules](https://github.com/dell/csm) (CSM) for Replication is part of the open-source suite of Kubernetes storage enablers for Dell EMC products.
+[Container Storage Modules](https://github.com/dell/csm) (CSM) for Replication is part of the open-source suite of Kubernetes storage enablers for Dell products.

-CSM for Replication project aims to bring Replication & Disaster Recovery capabilities of Dell EMC Storage Arrays to Kubernetes clusters.
+The CSM for Replication project aims to bring Replication & Disaster Recovery capabilities of Dell Storage Arrays to Kubernetes clusters.
It helps you replicate groups of volumes using the native replication technology available on the storage array and can provide a way to restart applications in case of both planned and unplanned migration.
@@ -18,32 +18,32 @@ CSM for Replication provides the following capabilities: {{}} | Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax | | - | :-: | :-: | :-: | :-: | :-: | -| Replicate data using native storage array based replication | no | no | yes | no | yes | -| Create `PersistentVolume` objects in the cluster representing the replicated volume | no | no | yes | no | yes | -| Create `DellCSIReplicationGroup` objects in the cluster | no | no | yes | no | yes | -| Failover & Reprotect applications using the replicated volumes | no | no | yes | no | yes | -| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | no | no | yes | no | yes | +| Replicate data using native storage array based replication | yes | no | yes | no | yes | +| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | no | yes | no | yes | +| Create `DellCSIReplicationGroup` objects in the cluster | yes | no | yes | no | yes | +| Failover & Reprotect applications using the replicated volumes | yes | no | yes | no | yes | +| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | no | yes | no | yes | {{
}} ## Supported Operating Systems/Container Orchestrator Platforms {{}} -| COP/OS | PowerMax | PowerStore | -|-|-|-| -| Kubernetes | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 | -| Red Hat OpenShift | X | 4.8, 4.9 | -| RHEL | 7.x, 8.x | 7.x, 8.x | -| CentOS | 7.8, 7.9 | 7.8, 7.9 | -| Ubuntu | 20.04 | 20.04 | -| SLES | 15SP2 | 15SP2 | +| COP/OS | PowerMax | PowerStore | PowerScale | +|-|-|-|-| +| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23| +| Red Hat OpenShift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 | +| RHEL | 7.x, 8.x | 7.x, 8.x | 7.x, 8.x | +| CentOS | 7.8, 7.9 | 7.8, 7.9 | 7.8, 7.9 | +| Ubuntu | 20.04 | 20.04 | 20.04 | +| SLES | 15SP2 | 15SP2 | 15SP2 | {{
}} ## Supported Storage Platforms {{}} -| | PowerMax | PowerStore | -|---------------|:-------------------:|:----------------:| -| Storage Array | 5978.479.479, 5978.669.669, 5978.711.711, Unisphere 9.2 | 1.0.x, 2.0.x | +| | PowerMax | PowerStore | PowerScale | +|---------------|:-------------------:|:----------------:|:----------------:| +| Storage Array | 5978.479.479, 5978.711.711, Unisphere 9.2 | 1.0.x, 2.0.x, 2.1.x | OneFS 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 | {{
}} ## Supported CSI Drivers @@ -52,8 +52,9 @@ CSM for Replication supports the following CSI drivers and versions. {{}} | Storage Array | CSI Driver | Supported Versions | | ------------- | ---------- | ------------------ | -| CSI Driver for Dell EMC PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1 | -| CSI Driver for Dell EMC PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1 | +| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 | {{
}}

## Details

@@ -68,30 +69,38 @@ You can also use a single stretched Kubernetes cluster for protecting your appli
 the objects still exist in pairs.

### What it does not do
-* Replicate application manifests within/across clusters
-* Stop applications before the planned/unplanned migration
-* Start applications after the migration
-* Replicate `PersistentVolumeClaim` objects within/across clusters
+* Replicate application manifests within/across clusters.
+* Stop applications before the planned/unplanned migration.
+* Start applications after the migration.
+* Replicate `PersistentVolumeClaim` objects within/across clusters.
+* Note: replication with METRO mode does not need the Replicator sidecar or the common controller.

### CSM for Replication Module Capabilities

CSM for Replication provides the following capabilities:

+{{}}
| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
-| - | :-: | :-: | :-: | :-: | :-: |
-| Asynchronous replication of PVs accross K8s clusters | yes | yes | no | no | no |
-| Synchronous replication of PVs accross K8s clusters | yes | no | no | no | no |
-| Single cluster (stretched) mode replication | yes | yes | no | no | no |
-| Replication actions (failover, reprotect) | yes | yes | no | no | no |
+| ---------| -------- | -------- | -------- | -------- | -------- |
+| Asynchronous replication of PVs across K8s clusters | yes | yes | yes | no | no |
+| Synchronous replication of PVs across K8s clusters | yes | no | no | no | no |
+| Single cluster (stretched) mode replication | yes | yes | yes | no | no |
+| Replication actions (failover, reprotect) | yes | yes | yes | no | no |
+{{}}
### Supported Platforms

-The following matrix provides a list of all supported versions for each Dell EMC Storage product.
+The following matrix provides a list of all supported versions for each Dell Storage product.
+
+| Platforms | PowerMax | PowerStore | PowerScale |
+| -------- | --------- | ---------- | ---------- |
+| Kubernetes | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 | 1.21, 1.22, 1.23 |
+| CSI Driver | 2.x | 2.x | 2.2+ |

-| Platforms | PowerMax | PowerStore |
-| -------- | --------- | --------- |
-| Kubernetes | 1.20, 1.21, 1.22 | 1.20, 1.21, 1.22 |
-| CSI Driver | 2.x | 2.x |
+| Platforms | PowerMax | PowerStore | PowerScale |
+| -------- | --------- | ---------- | ---------- |
+| RedHat Openshift | 4.8, 4.9 | 4.8, 4.9 | 4.8, 4.9 |
+| CSI Driver | 2.2+ | 2.x | 2.2+ |

For compatibility with storage arrays please refer to corresponding [CSI drivers](../csidriver/#features-and-capabilities)
diff --git a/content/v3/replication/deployment/_index.md b/content/v3/replication/deployment/_index.md
index ab4895e042..7a0af6d942 100644
--- a/content/v3/replication/deployment/_index.md
+++ b/content/v3/replication/deployment/_index.md
@@ -3,5 +3,5 @@ title: "Deployment"
 linkTitle: "Deployment"
 weight: 1
 Description: >
-  Installation for Dell EMC Container Storage Module (CSM) for Replication
+  Installation for Dell Container Storage Module (CSM) for Replication
 ---
diff --git a/content/v3/replication/deployment/configmap-secrets.md b/content/v3/replication/deployment/configmap-secrets.md
index 22e4048c83..677a309e7a 100644
--- a/content/v3/replication/deployment/configmap-secrets.md
+++ b/content/v3/replication/deployment/configmap-secrets.md
@@ -15,7 +15,8 @@ You need to create secrets (using either of the two methods) in each cluster inv
 the respective CSM Replication Controllers.

>Important: Direct network visibility between clusters is required for CSM-Replication to work.
-> Cluster-1's API URL has to be pingable from cluster-2 pods and vice versa.
+> Cluster-1's API URL has to be pingable from cluster-2 pods and vice versa. If private networks are used and/or DNS is not set up properly, you may need to modify the `/etc/hosts` file from within the controller's pod.
+> This can be achieved by using the Helm installation method. Refer to [Using the installation script](../installation/#using-the-installation-script).

>Note: If you are using a single stretched cluster, then you can skip all the following steps
diff --git a/content/v3/replication/deployment/installation.md b/content/v3/replication/deployment/installation.md
index 8ff0a997b5..3a30e17f5e 100644
--- a/content/v3/replication/deployment/installation.md
+++ b/content/v3/replication/deployment/installation.md
@@ -47,6 +47,12 @@ kubectl create ns dell-replication-controller
 cp ../helm/csm-replication/values.yaml ./myvalues.yaml
 bash scripts/install.sh --values ./myvalues.yaml
 ```
+>Note: The current installation method allows you to specify a custom hostname-to-IP entry to be appended to the controller's `/etc/hosts` file. This can be useful if the controller is deployed in a private environment where DNS is not set up properly, but the Kubernetes clusters use an FQDN as the API server's address.
+> The feature can be enabled by modifying `values.yaml`:
+>```yaml
+> hostAliases:
+>   enableHostAliases: true
+>   hostName: "foo.bar"
+>   ip: "10.10.10.10"
+>```

This script will do the following:
1. Install `DellCSIReplicationGroup` CRD in your cluster
@@ -65,9 +71,9 @@ After the installation ConfigMap will consist of only the `logLevel` field, to a

The following CSI drivers support replication:
1. CSI driver for PowerMax
2. CSI driver for PowerStore
+3. CSI driver for PowerScale

-Please follow the steps outlined [here](../powermax) for enabling replication for PowerMax & [here](../powerstore) for PowerStore during
-the driver installation.
+Please follow the steps outlined on the [PowerMax](../powermax), [PowerStore](../powerstore), or [PowerScale](../powerscale) pages during the driver installation.

>Note: Please ensure that replication CRDs are installed in the clusters where you are installing the CSI drivers. These CRDs are generally installed as part of the CSM Replication controller installation process.
diff --git a/content/v3/replication/deployment/powermax.md b/content/v3/replication/deployment/powermax.md
index 34133572fb..2d9fca7e0a 100644
--- a/content/v3/replication/deployment/powermax.md
+++ b/content/v3/replication/deployment/powermax.md
@@ -8,7 +8,7 @@ description: Enabling Replication feature for CSI PowerMax

Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement `dell-csi-extensions` calls.

-CSI driver for Dell EMC PowerMax supports necessary extension calls from `dell-csi-extensions`. To be able to provision replicated volumes you would need to do the steps described in the following sections.
+CSI driver for Dell PowerMax supports necessary extension calls from `dell-csi-extensions`. To be able to provision replicated volumes you would need to do the steps described in the following sections.

### Before Installation

@@ -84,8 +84,7 @@ You can create them manually or with help from `repctl`.

#### Manual Storage Class Creation

-You can find sample replication enabled storage class in the driver repository
-at `./samples/storageclass/powermax_srdf.yaml`.
+You can find a sample replication enabled storage class in the driver repository [here](https://github.com/dell/csi-powermax/blob/main/samples/storageclass/powermax_srdf.yaml).

It will look like this:
```yaml
@@ -197,7 +196,7 @@ your Kubernetes clusters with `kubectl`.
 (using a single storage class configuration) in one command.

To create storage classes with `repctl` you need to fill up the config with necessary information.
-You can find an example in `repctl/examples/powermax_example_values.yaml`, copy it, and modify it to your needs.
+You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powermax_example_values.yaml), copy it, and modify it to your needs.

If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.

@@ -231,7 +230,7 @@ added your clusters to repctl via the `add` command before.

To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters.

-After creating storage classes you can make sure they are in place by using `./repctl list storageclasses` command.
+After creating storage classes you can make sure they are in place by using the `./repctl get storageclasses` command.
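As a minimal sketch of that flow, assuming both clusters were already registered with repctl's `add` command and using the example config file referenced above:

```console
# Create mirrored storage classes on both clusters from one config,
# then verify they are in place.
$ ./repctl create sc --from-config powermax_example_values.yaml
$ ./repctl get storageclasses
```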
### Provisioning Replicated Volumes
diff --git a/content/v3/replication/deployment/powerscale.md b/content/v3/replication/deployment/powerscale.md
new file mode 100644
index 0000000000..4133be27e2
--- /dev/null
+++ b/content/v3/replication/deployment/powerscale.md
@@ -0,0 +1,188 @@
+---
+title: PowerScale
+linktitle: PowerScale
+weight: 7
+description: >
+  Enabling Replication feature for CSI PowerScale
+---
+## Enabling Replication in CSI PowerScale
+
+Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement `dell-csi-extensions` calls.
+
+CSI driver for Dell PowerScale supports necessary extension calls from `dell-csi-extensions`. To be able to provision replicated volumes you would need to do the steps described in the following sections.
+
+### Before Installation
+
+#### On Storage Array
+Ensure that the SyncIQ service is enabled on both arrays. You can do that by navigating to the `SyncIQ` section under the `Data protection` tab.
+
+The current implementation supports one-to-one replication, so you need to ensure that one array can reach the other and vice versa.
+
+##### SyncIQ encryption
+
+If you wish to use `SyncIQ` encryption, you should ensure that you've added a server certificate first by navigating to `Data protection->SyncIQ->Settings`.
+
+After adding the certificate, you can choose to use it by checking `Encrypt SyncIQ connection` from the dropdown.
+
+After that, you can add similar certificates of other arrays in `SyncIQ->Certificates`, and ensure you've added the certificate of the array you want to replicate to.
+
+Similar steps should be done in the reverse direction, so `array-1` has the `array-2` certificate visible in the `SyncIQ->Certificates` tab and `array-2` has the `array-1` certificate visible in its own `SyncIQ->Certificates` tab.
+
+#### In Kubernetes
+Ensure you have installed the CRDs and the replication controller in your clusters.
+
+To verify you have everything in order you can execute the following commands:
+
+* Check controller pods
+  ```shell
+  kubectl get pods -n dell-replication-controller
+  ```
+  Pods should be `READY` and `RUNNING`.
+* Check that the controller config map is properly populated
+  ```shell
+  kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
+  ```
+  The `data` field should be properly populated with a cluster-id of your choosing and, if using a multi-cluster
+  installation, your `targets:` parameter should be populated by a list of target cluster IDs.
+
+
+If something is not installed or is out of place, please refer to the installation instructions in [installation-repctl](../install-repctl) or [installation](../installation).
+
+### Installing Driver With Replication Module
+
+To install the driver with replication enabled, you need to ensure you have set
+the Helm parameter `controller.replication.enabled` in your copy of the example `values.yaml` file
+(usually called `my-isilon-settings.yaml`, `myvalues.yaml`, etc.).
+
+Here is an example of what that would look like:
+```yaml
+...
+# controller: configure controller specific parameters
+controller:
+  ...
+  # replication: allows to configure replication
+  replication:
+    enabled: true
+    image: dellemc/dell-csi-replicator:v1.2.0
+    replicationContextPrefix: "powerscale"
+    replicationPrefix: "replication.storage.dell.com"
+...
+```
+You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
+
+After enabling the replication module, you can continue to install the CSI driver for PowerScale following the usual installation procedure. Just ensure you've added the necessary array connection information to the secret.
+
+##### SyncIQ encryption
+
+If you plan to use encryption, you need to set `replicationCertificateID` in the array connection secret. To check the ID of the certificate for the cluster, you can navigate to `Data protection->SyncIQ->Settings`, find your certificate in the `Server Certificates` section and then push the `View/Edit` button. It will open a dialog that should contain the `Id` field. Use the value of that field to set `replicationCertificateID`.
+
+> **_NOTE:_** You need to install your driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster.
+
+
+### Creating Storage Classes
+
+To provision replicated volumes, you need to create adequately configured storage classes on both the source and target clusters.
+
+A pair of storage classes on the source and target clusters would be essentially `mirrored` copies of one another.
+You can create them manually or with the help of `repctl`.
+
+#### Manual Storage Class Creation
+
+You can find a sample replication enabled storage class in the driver repository [here](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon-replication.yaml).
+
+It will look like this:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: isilon-replication
+provisioner: csi-isilon.dellemc.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+parameters:
+  replication.storage.dell.com/isReplicationEnabled: "true"
+  replication.storage.dell.com/remoteStorageClassName: "isilon-replication"
+  replication.storage.dell.com/remoteClusterID: "target"
+  replication.storage.dell.com/remoteSystem: "cluster-2"
+  replication.storage.dell.com/rpo: Five_Minutes
+  replication.storage.dell.com/ignoreNamespaces: "false"
+  replication.storage.dell.com/volumeGroupPrefix: "csi"
+  AccessZone: System
+  IsiPath: /ifs/data/csi
+  RootClientEnabled: "false"
+  ClusterName: cluster-1
+```
+
+Let's go through each parameter and what it means:
+* `replication.storage.dell.com/isReplicationEnabled`, if set to `true`, will mark this storage class as replication enabled; just leave it as `true`.
+* `replication.storage.dell.com/remoteStorageClassName` points to the name of the remote storage class. If you are using replication with the multi-cluster configuration you can make it the same as the current storage class name.
+* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote cluster. It is the same ID you put in the replication controller config map.
+* `replication.storage.dell.com/remoteSystem` is the name of the remote system and should match the `clusterName` you used in the `isilon-creds` secret.
+* `replication.storage.dell.com/rpo` is the acceptable amount of data, measured in units of time, that may be lost due to a failure.
+> NOTE: Available RPO values "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", "One_Day"
+* `replication.storage.dell.com/ignoreNamespaces`, if set to `true`, makes the PowerScale driver ignore in what namespace volumes are created and put every volume created using this storage class into a single volume group.
+* `replication.storage.dell.com/volumeGroupPrefix` represents the string prefixed to the volume group name to differentiate volume groups.
+* `AccessZone` is the name of the access zone a volume can be created in.
+* `IsiPath` is the base path for the volumes to be created on the PowerScale cluster.
+* `RootClientEnabled` determines whether the driver should enable root squashing or not.
+* `ClusterName` is the name of the PowerScale cluster where the PV will be provisioned, specified as it was listed in the `isilon-creds` secret.
+
+After figuring out how storage classes would look, you just need to go and apply them to your Kubernetes clusters with `kubectl`.
+
+#### Storage Class creation with `repctl`
+
+`repctl` can simplify storage class creation by creating a pair of mirrored storage classes in both clusters
+(using a single storage class configuration) in one command.
+
+To create storage classes with `repctl` you need to fill in the config with the necessary information.
+You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powerscale_example_values.yaml), copy it, and modify it to your needs.
+
+If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.
+
+Let's use the same example from manual installation and see what the config would look like:
+```yaml
+sourceClusterID: "source"
+targetClusterID: "target"
+name: "isilon-replication"
+driver: "isilon"
+reclaimPolicy: "Delete"
+replicationPrefix: "replication.storage.dell.com"
+parameters:
+  rpo: "Five_Minutes"
+  ignoreNamespaces: "false"
+  volumeGroupPrefix: "csi"
+  accessZone: "System"
+  isiPath: "/ifs/data/csi"
+  rootClientEnabled: "false"
+  clusterName:
+    source: "cluster-1"
+    target: "cluster-2"
+```
+
+> NOTE: Both storage classes are expected to use an access zone with the same name.
+
+After preparing the config, you can apply it to both clusters with `repctl`. Before you do this, ensure you've added your clusters to `repctl` via the `add` command.
+
+To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters.
+
+After creating storage classes you can make sure they are in place by using the `./repctl get storageclasses` command.
+
+### Provisioning Replicated Volumes
+
+After installing the driver and creating storage classes, you can create volumes using the newly created storage classes.
+
+On your source cluster, create a PersistentVolumeClaim using one of the replication-enabled Storage Classes.
+The CSI PowerScale driver will create a volume on the array, add it to a VolumeGroup and configure replication
+using the parameters provided in the replication enabled Storage Class.
+
+### Supported Replication Actions
+The CSI PowerScale driver supports the following list of replication actions:
+- FAILOVER_REMOTE
+- UNPLANNED_FAILOVER_LOCAL
+- REPROTECT_LOCAL
+- SUSPEND
+- RESUME
+- SYNC
diff --git a/content/v3/replication/deployment/powerstore.md b/content/v3/replication/deployment/powerstore.md
index 5541b01e88..c7bf44721d 100644
--- a/content/v3/replication/deployment/powerstore.md
+++ b/content/v3/replication/deployment/powerstore.md
@@ -7,11 +7,9 @@ description: >
 ---
## Enabling Replication In CSI PowerStore

-For the Container Storage Modules (CSM) for Replication sidecar container to work properly it needs to be installed
-alongside CSI driver that supports replication `dell-csi-extensions` calls.
+Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement `dell-csi-extensions` calls.

-CSI driver for Dell EMC PowerStore supports necessary extension calls from `dell-csi-extensions` and to be able to
-provision replicated volumes you would need to do the steps described in the following sections.
+CSI driver for Dell PowerStore supports necessary extension calls from `dell-csi-extensions`. To be able to provision replicated volumes you would need to do the steps described in the following sections.

### Before Installation

@@ -84,7 +82,7 @@ You can create them manually or with help from `repctl`.

#### Manual Storage Class Creation

You can find sample replication enabled storage class in the driver repository
-at `./samples/storageclass/powerstore-replication.yaml`.
+[here](https://github.com/dell/csi-powerstore/blob/main/samples/storageclass/powerstore-replication.yaml).

It will look like this:
```yaml
@@ -179,7 +177,7 @@ your Kubernetes clusters with `kubectl`.
 (using a single storage class configuration) in one command.

To create storage classes with `repctl` you need to fill up the config with necessary information.
-You can find an example in `repctl/examples/powerstore_example_values.yaml`, copy it, and modify it to your needs.
+You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powerstore_example_values.yaml), copy it, and modify it to your needs.

If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.

@@ -209,7 +207,7 @@ added your clusters to repctl via the `add` command before.

To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters.

-After creating storage classes you can make sure they are in place by using `./repctl list storageclasses` command.
+After creating storage classes you can make sure they are in place by using the `./repctl get storageclasses` command.

### Provisioning Replicated Volumes

diff --git a/content/v3/replication/deployment/storageclasses.md b/content/v3/replication/deployment/storageclasses.md
index b0eaaaffb2..df85a44833 100644
--- a/content/v3/replication/deployment/storageclasses.md
+++ b/content/v3/replication/deployment/storageclasses.md
@@ -29,8 +29,7 @@ This should contain the name of the storage class on the remote cluster which is

>Note: You still need to create a pair of storage classes even while using a single stretched cluster

### Driver specific parameters
-Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes) & [PowerStore](../powerstore/#creating-storage-classes) for a detailed
-list of parameters.
+Please refer to the driver-specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), or [PowerScale](../powerscale/#creating-storage-classes) for a detailed list of parameters.

### PV sync Deletion

diff --git a/content/v3/replication/disaster-recovery.md b/content/v3/replication/disaster-recovery.md
index 26abda105a..e66b5d9b9a 100644
--- a/content/v3/replication/disaster-recovery.md
+++ b/content/v3/replication/disaster-recovery.md
@@ -15,35 +15,32 @@ This scenario is the choice when you want to try your disaster recovery plan or
 a.
Execute "failover" action on selected ReplicationGroup using the cluster name - ./repctl --rg rg-id failover --to-cluster target-cluster-name + ./repctl --rg rg-id failover --target target-cluster-name b. Execute "reprotect" action on selected ReplicationGroup which will resume the replication from new "source" - ./repctl --rg rg-id reprotect --to-cluster new-source-cluster-name -

- ![state_changes1](../state_changes1.png) -

+ ./repctl --rg rg-id reprotect --at new-source-cluster-name + +![state_changes1](../state_changes1.png) ### Unplanned Migration to the target cluster/array This scenario is the choice when you lost a site. a. Execute "failover" action on selected ReplicationGroup using the cluster name - ./repctl --rg rg-id failover --to-cluster target-cluster-name --unplanned + ./repctl --rg rg-id failover --target target-cluster-name --unplanned b. Execute "swap" action on selected ReplicationGroup which would swap personalities of R1 and R2 (only applicable for PowerMax driver) - ./repctl --rg rg-id swap --to-cluster target-cluster-name + ./repctl --rg rg-id swap --at target-cluster-name **Note:** Unplanned migration usually happens when the original "source" cluster is unavailable. The following action makes sense when the cluster is back. c. Execute "reprotect" action on selected ReplicationGroup which will resume the replication. - ./repctl --rg rg-id reprotect --to-cluster new-source-cluster-name + ./repctl --rg rg-id reprotect --at new-source-cluster-name -

- ![state_changes2](../state_changes2.png) -

+![state_changes2](../state_changes2.png)

>Note: When users do Failover and Failback, the test pods on the source cluster may go into a "CrashLoopBackOff" state since they will try to remount the same volume which is already mounted. To get around this problem, bring the number of replicas down to 0 and then, after that is done, bring it back up to 1.
diff --git a/content/v3/replication/replication-actions.md b/content/v3/replication/replication-actions.md
index f472a99830..fa9502265c 100644
--- a/content/v3/replication/replication-actions.md
+++ b/content/v3/replication/replication-actions.md
@@ -6,7 +6,7 @@ description: >
 DellCSIReplicationGroup Actions
 ---

-You can exercise native replication control operations from Dell EMC storage arrays by performing "Actions" on the replicated group of volumes using the DellCSIReplicationGroup object.
+You can exercise native replication control operations from Dell storage arrays by performing "Actions" on the replicated group of volumes using the DellCSIReplicationGroup object.

You can patch the DellCSIReplicationGroup Custom Resource and set the action field in the spec to one of the allowed values (refer to tables in this document).

@@ -28,28 +28,31 @@ Any action with the __LOCAL__ suffix means, do this action for the local site. A

For example -

* If the CR at `Hopkinton` is patched with action FAILOVER_REMOTE, it means that the driver will attempt to `Fail Over` to __Durham__ which is the remote site.
-* If the CR at `Durham` is patched with action FAILOVER_LOCAL, it means that the driver will attempt to `Fail over` to __Durham__ which is the local site.
+* If the CR at `Durham` is patched with action FAILOVER_LOCAL, it means that the driver will attempt to `Fail Over` to __Durham__ which is the local site.
* If the CR at `Durham` is patched with REPROTECT_LOCAL, it means that the driver will `Re-protect` the volumes at __Durham__ which is the local site.

The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:

-| Workflow | Actions | PowerMax | PowerStore |
-| ----------- | --------- | -------------- | ---------- |
-| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) |
-| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT |
-| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) |
+{{}}
+| Workflow | Actions | PowerMax | PowerStore | PowerScale |
+| ------------------- | ----------------------------------- | --------------------- | -------------------------------------- | ---------------------------------------------- |
+| Planned Migration | FAILOVER_LOCAL<br>FAILOVER_REMOTE | symrdf failover -swap | FAILOVER (no REPROTECT after FAILOVER) | allow_writes on target, disable local policy |
+| Reprotect | REPROTECT_LOCAL<br>REPROTECT_REMOTE | symrdf resume/est | REPROTECT | enable local policy, disallow_writes on remote |
+| Unplanned Migration | UNPLANNED_FAILOVER_LOCAL<br>UNPLANNED_FAILOVER_REMOTE | symrdf failover -force | FAILOVER (at target site) | break association on target |
+{{}}

### Maintenance Actions
These actions can be run at any site and are used to change the replication link state for maintenance activities.

The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays

{{}}
-| Action | Description | PowerMax | PowerStore |
-|-----------|------------------------|-------------------|-------------------|
-| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE |
-| RESUME | Resume replication | symrdf resume | RESUME |
-| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW |
+| Action | Description | PowerMax | PowerStore | PowerScale |
+|-----------|--------------------------------------|----------------|------------|----------------------|
+| SUSPEND | Temporarily suspend<br>replication | symrdf suspend | PAUSE | disable local policy |
+| RESUME | Resume replication | symrdf resume | RESUME | enable local policy |
+| SYNC | Synchronize all changes<br>from source to target | symrdf establish | SYNCHRONIZE NOW | start syncIQ job |
{{}}
+

### How to perform actions
We strongly recommend using `repctl` to perform any actions on `DellCSIReplicationGroup` objects. You can find detailed steps [here](../tools/#executing-actions)
diff --git a/content/v3/replication/uninstall.md b/content/v3/replication/uninstall.md
index 51cdc31976..7ea7935157 100644
--- a/content/v3/replication/uninstall.md
+++ b/content/v3/replication/uninstall.md
@@ -3,7 +3,7 @@ title: Uninstall
 linktitle: Uninstall
 weight: 10
 description: >
-  Dell EMC Container Storage Modules (CSM) for Replication Uninstallation
+  Dell Container Storage Modules (CSM) for Replication Uninstallation
 ---
This section outlines the uninstallation steps for Container Storage Modules (CSM) for Replication.
diff --git a/content/v3/resiliency/_index.md b/content/v3/resiliency/_index.md
index 6aaa5551fb..7ccb890831 100644
--- a/content/v3/resiliency/_index.md
+++ b/content/v3/resiliency/_index.md
@@ -3,21 +3,22 @@ title: "Resiliency"
 linkTitle: "Resiliency"
 weight: 6
 Description: >
-  Dell EMC Container Storage Modules (CSM) for Resiliency
+  Dell Container Storage Modules (CSM) for Resiliency
 ---

-[Container Storage Modules](https://github.com/dell/csm) (CSM) for Resiliency is part of the open-source suite of Kubernetes storage enablers for Dell EMC products.
+[Container Storage Modules](https://github.com/dell/csm) (CSM) for Resiliency is part of the open-source suite of Kubernetes storage enablers for Dell products.

User applications can have problems if you want their Pods to be resilient to node failure. This is especially true of those deployed with StatefulSets that use PersistentVolumeClaims. Kubernetes guarantees that there will never be two copies of the same StatefulSet Pod running at the same time and accessing storage. Therefore, it does not clean up StatefulSet Pods if the node executing them fails.

-For the complete discussion and rationale, go to https://github.com/kubernetes/community and search for the pod-safety.md file (path: contributors/design-proposals/storage/pod-safety.md).
+For the complete discussion and rationale, you can read the [pod-safety design proposal](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/pod-safety.md).
+
For more background on the forced deletion of Pods in a StatefulSet, please visit [Force Delete StatefulSet Pods](https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#:~:text=In%20normal%20operation%20of%20a,1%20are%20alive%20and%20ready).

## CSM for Resiliency High-Level Description

CSM for Resiliency is designed to make Kubernetes Applications, including those that utilize persistent storage, more resilient to various failures. The first component of the Resiliency module is a pod monitor that is specifically designed to protect stateful applications from various failures. It is not a standalone application, but rather is deployed as a _sidecar_ to CSI (Container Storage Interface) drivers, in both the driver's controller pods and the driver's node pods. Deploying CSM for Resiliency as a sidecar allows it to make direct requests to the driver through the Unix domain socket that Kubernetes sidecars use to make CSI requests.

-Some of the methods CSM for Resiliency invokes in the driver are standard CSI methods, such as NodeUnpublishVolume, NodeUnstageVolume, and ControllerUnpublishVolume. CSM for Resiliency also uses proprietary calls that are not part of the standard CSI specification. Currently, there is only one, ValidateVolumeHostConnectivity that returns information on whether a host is connected to the storage system and/or whether any I/O activity has happened in the recent past from a list of specified volumes. This allows CSM for Resiliency to make more accurate determinations about the state of the system and its persistent volumes.
+Some of the methods CSM for Resiliency invokes in the driver are standard CSI methods, such as NodeUnpublishVolume, NodeUnstageVolume, and ControllerUnpublishVolume. CSM for Resiliency also uses proprietary calls that are not part of the standard CSI specification. Currently, there is only one, ValidateVolumeHostConnectivity, which returns information on whether a host is connected to the storage system and/or whether any I/O activity has happened in the recent past from a list of specified volumes. This allows CSM for Resiliency to make more accurate determinations about the state of the system and its persistent volumes.

CSM for Resiliency is designed to adhere to pod affinity settings of pods.

Accordingly, CSM for Resiliency is adapted to and qualified with each CSI driver it is to be used with. Different storage systems have different nuances and characteristics that CSM for Resiliency must take into account.

@@ -26,40 +27,40 @@ Accordingly, CSM for Resiliency is adapted to and qualified with each CSI driver

CSM for Resiliency provides the following capabilities:

{{}}
-| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax |
-| - | :-: | :-: | :-: | :-: | :-: |
-| Detect pod failures for the following failure types - Node failure, K8S Control Plane Network failure, Array I/O Network failure | no | yes | no | yes | no |
-| Cleanup pod artifacts from failed nodes | no | yes | no | yes | no |
-| Revoke PV access from failed nodes | no | yes | no | yes | no |
+| Capability | PowerScale | Unity | PowerStore | PowerFlex | PowerMax |
+| --------------------------------------- | :--------: | :---: | :--------: | :-------: | :------: |
+| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | no | yes | no | yes | no |
+| Cleanup pod artifacts from failed nodes | no | yes | no | yes | no |
+| Revoke PV access from failed nodes | no | yes | no | yes | no |
{{
}} ## Supported Operating Systems/Container Orchestrator Platforms {{}} -| COP/OS | Supported Versions | -|-|-| -| Kubernetes | 1.20, 1.21, 1.22 | -| Red Hat OpenShift | 4.8, 4.9 | -| RHEL | 7.x, 8.x | -| CentOS | 7.8, 7.9 | +| COP/OS | Supported Versions | +| ---------- | :----------------: | +| Kubernetes | 1.21, 1.22, 1.23 | +| Red Hat OpenShift | 4.8, 4.9 | +| RHEL | 7.x, 8.x | +| CentOS | 7.8, 7.9 | {{
}} ## Supported Storage Platforms {{}} -| | PowerFlex | Unity | -|---------------|:-------------------:|:----------------:| -| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0 | +| | PowerFlex | Unity | +| ------------- | :----------: | :------------------------: | +| Storage Array | 3.5.x, 3.6.x | 5.0.5, 5.0.6, 5.0.7, 5.1.0, 5.1.2 | {{
}} ## Supported CSI Drivers -CSM for Authorization supports the following CSI drivers and versions. +CSM for Resiliency supports the following CSI drivers and versions. {{}} -| Storage Array | CSI Driver | Supported Versions | -| ------------- | ---------- | ------------------ | -| CSI Driver for Dell EMC PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0,v2.1 | -| CSI Driver for Dell EMC Unity | [csi-unity](https://github.com/dell/csi-unity) | v2.0,v2.1 | +| Storage Array | CSI Driver | Supported Versions | +| --------------------------------- | :----------: | :----------------: | +| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0, v2.1, v2.2 | +| CSI Driver for Dell Unity | [csi-unity](https://github.com/dell/csi-unity) | v2.0, v2.1, v2.2 | {{
}} ### PowerFlex Support @@ -73,10 +74,10 @@ PowerFlex is a highly scalable array that is very well suited to Kubernetes depl ### Unity Support -Dell EMC Unity is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity systems are designed for all-Flash, deliver the best value in the market, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition. +Dell Unity is targeted for midsized deployments, remote or branch offices, and cost-sensitive mixed workloads. Unity systems are designed for all-Flash, deliver the best value in the market, and are available in purpose-built (all Flash or hybrid Flash), converged deployment options (through VxBlock), and software-defined virtual edition. * Unity (purpose-built): A modern midrange storage solution, engineered from the ground up to meet market demands for Flash, affordability, and incredible simplicity. The Unity Family is available in 12 All Flash models and 12 Hybrid models. -* VxBlock (converged): Unity storage options are also available in Dell EMC VxBlock System 1000. +* VxBlock (converged): Unity storage options are also available in Dell VxBlock System 1000. * UnityVSA (virtual): The Unity Virtual Storage Appliance (VSA) allows the advanced unified storage and data management features of the Unity family to be easily deployed on VMware ESXi servers, for a ‘software defined’ approach. UnityVSA is available in two editions: * Community Edition is a free downloadable 4 TB solution recommended for non-production use. * Professional Edition is a licensed subscription-based offering available at capacity levels of 10 TB, 25 TB, and 50 TB. The subscription includes access to online support resources, EMC Secure Remote Services (ESRS), and on-call software- and systems-related support. @@ -108,7 +109,7 @@ The following provisioning types are supported and have been tested: * ReadWriteMany volumes. This may have issues if a node has multiple pods accessing the same volumes. In any case, once pod cleanup fences the volumes on a node, they will no longer be available to any pods using those volumes on that node. We will endeavor to support this in the future. -* Multiple instances of the same driver type (for example two CSI driver for Dell EMC PowerFlex deployments.) +* Multiple instances of the same driver type (for example, two CSI Driver for Dell PowerFlex deployments). ## Deploying and Managing Applications Protected by CSM for Resiliency diff --git a/content/v3/resiliency/deployment.md b/content/v3/resiliency/deployment.md index 3710f604a1..6da570dfd5 100644 --- a/content/v3/resiliency/deployment.md +++ b/content/v3/resiliency/deployment.md @@ -3,7 +3,7 @@ title: Deployment linktitle: Deployment weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Resiliency installation + Dell Container Storage Modules (CSM) for Resiliency installation --- CSM for Resiliency is installed as part of the Dell CSI driver installation. The drivers can be installed either by a _helm chart_ or by the _Dell CSI Operator_. Currently, only _Helm chart_ installation is supported. 
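To make the Helm flow concrete, here is a minimal sketch of installing the PowerFlex driver with the podmon sidecar enabled. The chart path, release name, and namespace are illustrative assumptions, not the authoritative procedure; the driver's own installation guide (for example, its `dell-csi-helm-installer` script) remains the reference.

```shell
# Hedged sketch: install the PowerFlex driver with CSM for Resiliency enabled.
# Chart location, release name, and namespace below are placeholders.
# myvalues.yaml carries the podmon section shown in the next snippet,
# with podmon.enabled set to true.
helm install vxflexos ./helm/csi-vxflexos \
  --namespace vxflexos \
  --values myvalues.yaml

# Confirm the podmon sidecar is running alongside the driver containers.
kubectl get pods -n vxflexos \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
```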
@@ -23,12 +23,13 @@ The drivers that support Helm chart installation allow CSM for Resiliency to be # Enable this feature only after contacting support for additional information podmon: enabled: true - image: dellemc/podmon:v1.0.0 + image: dellemc/podmon:v1.1.0 controller: args: - "--csisock=unix:/var/run/csi/csi.sock" - "--labelvalue=csi-vxflexos" - "--mode=controller" + - "--skipArrayConnectionValidation=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" node: args: @@ -55,9 +56,9 @@ To install CSM for Resiliency with the driver, the following changes are require | mode | Required | Must be set to "controller" for controller-podmon and "node" for node-podmon. | controller & node | | csisock | Required | This should be left as set in the helm template for the driver. For controller:
`-csisock=unix:/var/run/csi/csi.sock`
For node it will vary depending on the driver's identity:
`-csisock=unix:/var/lib/kubelet/plugins`
`/vxflexos.emc.dell.com/csi_sock` | controller & node | | leaderelection | Required | Boolean value that should be set to true for controller and false for node. The default value is true. | controller & node | -| skipArrayConnectionValidation | Optional | Boolean value that if set to true will cause controllerPodCleanup to skip the validation that no I/O is ongoing before cleaning up the pod. | controller | +| skipArrayConnectionValidation | Optional | Boolean value that, if set to true, causes controllerPodCleanup to skip the validation that no I/O is ongoing before cleaning up the pod. Setting it to true also allows controllerPodCleanup to proceed on a K8S Control Plane failure (kubelet service down). | controller | | labelKey | Optional | String value that sets the label key used to denote pods to be monitored by CSM for Resiliency. Management is easier if this key is the same for all driver types, with drivers differentiated by different labelValues (see below). If the label keys are the same across all drivers, you can run `kubectl get pods -A -l labelKey` to find all the pods protected by CSM for Resiliency. labelKey defaults to "podmon.dellemc.com/driver". | controller & node | -| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell EMC PowerFlex and "csi-unity" for CSI Driver for Dell EMC Unity | controller & node | +| labelValue | Required | String that sets the value that denotes pods to be monitored by CSM for Resiliency. This must be specific for each driver. Defaults to "csi-vxflexos" for CSI Driver for Dell PowerFlex and "csi-unity" for CSI Driver for Dell Unity | controller & node | | arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller | | arrayConnectivityConnectionLossThreshold | Optional | Gives the number of failed connection polls that will be deemed to indicate array connectivity loss. Should not be set to less than 3. See the specific section for each array type for additional guidance. | controller | | driver-config-params | Required | String that sets the path to a file containing configuration parameters (for instance, log levels) for a driver. | controller & node | @@ -79,6 +80,7 @@ podmon: - "-mode=controller" - "-arrayConnectivityPollRate=5" - "-arrayConnectivityConnectionLossThreshold=3" + - "--skipArrayConnectionValidation=false" - "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml" node: args: @@ -104,6 +106,7 @@ podmon: - "-labelvalue=csi-unity" - "-driverPath=csi-unity.dellemc.com" - "-mode=controller" + - "--skipArrayConnectionValidation=false" - "--driver-config-params=/unity-config/driver-config-params.yaml" node: args: @@ -135,7 +138,7 @@ This is a list of parameters that can be adjusted for CSM for Resiliency: | PODMON_NODE_LOG_LEVEL | String | "debug" |Logging level for the node podmon sidecar. 
Standard values: 'info', 'error', 'warning', 'debug', 'trace' | | PODMON_ARRAY_CONNECTIVITY_POLL_RATE | Integer (>0) | 15 |An interval in seconds to poll the underlying array | | PODMON_ARRAY_CONNECTIVITY_CONNECTION_LOSS_THRESHOLD | Integer (>0) | 3 |A value representing the number of failed connection poll intervals before marking the array connectivity as lost | -| PODMON_SKIP_ARRAY_CONNECTION_VALIDATION | Boolean | false |Flag to disable the array connectivity check | +| PODMON_SKIP_ARRAY_CONNECTION_VALIDATION | Boolean | false |Flag to disable the array connectivity check; set it to true when nodes carry a NoSchedule or NoExecute taint caused by a K8S Control Plane failure (kubelet failure) | Here is an example of the parameters: diff --git a/content/v3/resiliency/troubleshooting.md b/content/v3/resiliency/troubleshooting.md index 18ddd0593d..af18c13414 100644 --- a/content/v3/resiliency/troubleshooting.md +++ b/content/v3/resiliency/troubleshooting.md @@ -3,7 +3,7 @@ title: Troubleshooting linktitle: Troubleshooting weight: 4 description: > - Dell EMC Container Storage Modules (CSM) for Resiliency - Troubleshooting + Dell Container Storage Modules (CSM) for Resiliency - Troubleshooting --- Some tools have been provided in the [tools](https://github.com/dell/karavi-resiliency/blob/main/tools) directory that will help you understand the system's state and facilitate troubleshooting. @@ -41,4 +41,8 @@ The script collects the following information: * The driver container logs for each of the driver pods. * For each namespace containing protected pods, the recent events logged in that namespace. -After successful execution of the script, it will deposit a file similar to driver.logs.20210319_1407.tgz in the current directory. Please submit that file with any [issues](https://github.com/dell/csm/issues). \ No newline at end of file +After successful execution of the script, it will deposit a file similar to driver.logs.20210319_1407.tgz in the current directory. Please submit that file with any [issues](https://github.com/dell/csm/issues). + +## Actions to take when pod resources are not completely cleaned up + +The node-podmon cleanup algorithm deliberately does not remove the node taint until all the protected volumes have been cleaned up from the node. This works well if the node fault lasts long enough that controller-podmon can evacuate all the protected pods from the node. However, if the failure is short-lived and controller-podmon does not clean up all the protected pods on the node, or if for some reason node-podmon cannot clean a pod completely, the taint is left on the node and manual intervention is required. The required intervention is for the operator to reboot the node, which ensures that no zombie pods survive. Upon seeing the reboot, node-podmon will then remove the taint. A sketch of this manual check-and-reboot flow is shown after the uninstallation note below. diff --git a/content/v3/resiliency/uninstallation.md b/content/v3/resiliency/uninstallation.md index 51240b443b..57f2905e40 100644 --- a/content/v3/resiliency/uninstallation.md +++ b/content/v3/resiliency/uninstallation.md @@ -3,7 +3,7 @@ title: Uninstallation linktitle: Uninstallation weight: 2 description: > - Dell EMC Container Storage Modules (CSM) for Resiliency Uninstallation + Dell Container Storage Modules (CSM) for Resiliency Uninstallation --- This section outlines the uninstallation steps for Container Storage Modules (CSM) for Resiliency. 
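The manual intervention described in the troubleshooting section above can be sketched as follows. This is a hedged example: the taint key is deployment-specific (the actual key applied by node-podmon depends on the driver and release), and the reboot mechanism varies by environment.

```shell
# Hedged sketch of the manual recovery flow when node-podmon leaves its taint.

# 1. Inspect the taints on the affected node; look for the podmon cleanup
#    taint (key name is deployment-specific -- confirm it on your cluster).
kubectl get node <node-name> -o jsonpath='{.spec.taints}'

# 2. If the taint remains after the fault has cleared, reboot the node so
#    that no zombie pods survive (ssh shown for illustration only).
ssh <node-name> sudo reboot

# 3. Node-podmon removes the taint after it observes the reboot; re-run
#    step 1 to confirm the taint is gone.
```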
diff --git a/content/v3/resiliency/upgrade.md b/content/v3/resiliency/upgrade.md index 5473ff849c..4466c77cc6 100644 --- a/content/v3/resiliency/upgrade.md +++ b/content/v3/resiliency/upgrade.md @@ -3,7 +3,7 @@ title: Upgrade linktitle: Upgrade weight: 3 description: > - Dell EMC Container Storage Modules (CSM) for Resiliency upgrade + Dell Container Storage Modules (CSM) for Resiliency upgrade --- CSM for Resiliency can be upgraded as part of the Dell CSI driver upgrade process. The drivers can be upgraded either by a _helm chart_ or by the _Dell CSI Operator_. Currently, only _Helm chart_ upgrade is supported for CSM for Resiliency. diff --git a/content/v3/resiliency/usecases.md b/content/v3/resiliency/usecases.md index bacefca590..daac595325 100644 --- a/content/v3/resiliency/usecases.md +++ b/content/v3/resiliency/usecases.md @@ -36,3 +36,5 @@ CSM for Resiliency's design is focused on detecting the following types of hardw 2. K8S Control Plane Network Failure. Control Plane Network Failure often has the same K8S failure signature (the node is tainted with NoSchedule or NoExecute). However, if there is a separate Array I/O interface, CSM for Resiliency can often detect that the Array I/O Network may be active even though the Control Plane Network is down. 3. Array I/O Network failure is detected by polling the array to determine if the array has a healthy connection to the node. The capabilities to do this vary greatly by array and communication protocol type (Fibre Channel, iSCSI, NFS, NVMe, or PowerFlex SDC IP protocol). By monitoring the Array I/O Network separately from the Control Plane Network, CSM for Resiliency has two different indicators of whether the node is healthy or not. + +4. K8S Control Plane Failure. Control Plane Failure is defined as failure of the kubelet on a given node. K8S Control Plane failures are generally discovered by receipt of a Node event with a NoSchedule or NoExecute taint, or by detection of such a taint when retrieving the Node via the K8S API. diff --git a/content/v3/snapshots/_index.md b/content/v3/snapshots/_index.md index ddbed0e82a..07be659245 100644 --- a/content/v3/snapshots/_index.md +++ b/content/v3/snapshots/_index.md @@ -3,7 +3,7 @@ title: "Snapshots" linkTitle: "Snapshots" weight: 8 Description: > - Snapshot module of Dell EMC CSI drivers + Snapshot module of Dell CSI drivers --- ## Volume Snapshot Feature diff --git a/content/v3/support/_index.md b/content/v3/support/_index.md index eab3b9b0af..458bd392a5 100644 --- a/content/v3/support/_index.md +++ b/content/v3/support/_index.md @@ -3,7 +3,7 @@ title: "Support" linkTitle: "Support" weight: 11 Description: > - Dell EMC Container Storage Modules (CSM) support + Dell Container Storage Modules (CSM) support --- For all your support needs or to follow the latest ongoing discussions and updates, join our Slack group. Click [here](http://del.ly/Slack_request) to request your invite. diff --git a/content/v3/troubleshooting/_index.md b/content/v3/troubleshooting/_index.md index fe699d592a..c07a2998c8 100644 --- a/content/v3/troubleshooting/_index.md +++ b/content/v3/troubleshooting/_index.md @@ -3,7 +3,7 @@ title: "Troubleshooting" linkTitle: "Troubleshooting" weight: 10 Description: > - Dell EMC Container Storage Modules (CSM) troubleshooting information + Dell Container Storage Modules (CSM) troubleshooting information --- Troubleshooting links for Container Storage Modules: