Fixing broken links (#1016)
Co-authored-by: Yamunadevi N Shanmugam <[email protected]>
boyamurthy and shanmydell authored Mar 5, 2024
1 parent 5e2b7a0 commit a2a1233
Showing 43 changed files with 155 additions and 149 deletions.
2 changes: 1 addition & 1 deletion content/docs/applicationmobility/_index.md
@@ -8,7 +8,7 @@ Description: >

>> NOTE: This tech-preview release is not intended for use in a production environment.
->> NOTE: Application Mobility requires a time-based license. See [Deployment](./deployment) for instructions.
+>> NOTE: Application Mobility requires a time-based license. See [Deployment](../deployment/helm/modules/installation/applicationmobility/) for instructions.
Container Storage Modules for Application Mobility provide Kubernetes administrators the ability to clone their stateful application workloads and application data to other clusters, either on-premise or in the cloud.

2 changes: 1 addition & 1 deletion content/docs/applicationmobility/troubleshooting.md
@@ -40,7 +40,7 @@ kubectl logs -n $namespace $pod $container > $logFileName

### Why are there error logs about a license?

-Application Mobility requires a license in order to function. See the [Deployment](../deployment) instructions for steps to request a license.
+Application Mobility requires a license in order to function. See the [Deployment](../../deployment/helm/modules/installation/applicationmobility/) instructions for steps to request a license.

There will be errors in the logs about the license for these cases:
- License does not exist
6 changes: 3 additions & 3 deletions content/docs/csidriver/troubleshooting/powerflex.md
@@ -14,15 +14,15 @@ description: Troubleshooting PowerFlex Driver
|CreateVolume error "System <Name> is not configured in the driver" | If the PowerFlex name is used for the systemID in the StorageClass, ensure the same name is also used for the systemID in the array config |
|Defcontext mount option seems to be ignored; volumes are still not being labeled correctly.|Ensure SELinux is enabled on the worker node, and ensure your container runtime manager is properly configured to be utilized with SELinux.|
|Mount options that interact with SELinux (like defcontext) are not working.|Check that your container orchestrator is properly configured to work with SELinux.|
-|Installation of the driver on Kubernetes v1.25/v1.26/v1.27 fails with the following error: <br />```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.23/v1.24/v1.25 requires the v1 version of the snapshot CRDs to be created in the cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)|
-| The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self-signed certificate is used for the PowerFlex array. See [certificate validation for PowerFlex Gateway](../../installation/helm/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)|
+|Installation of the driver on Kubernetes v1.25/v1.26/v1.27 fails with the following error: <br />```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.23/v1.24/v1.25 requires the v1 version of the snapshot CRDs to be created in the cluster, see the [Volume Snapshot Requirements](../../../deployment/helm/drivers/installation/powerflex/#optional-volume-snapshot-requirements)|
+| The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self-signed certificate is used for the PowerFlex array. See [certificate validation for PowerFlex Gateway](../../../deployment/helm/drivers/installation/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)|
| When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. |
| The controller pod is stuck and producing errors such as: `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that the v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.28.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. Note: this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
| CSI-PowerFlex volumes cannot mount because they are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: <br/> 1. Remove any multipath mapping involving a PowerFlex volume with `multipath -f <powerflex volume>` <br/> 2. Blacklist CSI-PowerFlex volumes in the multipath config file |
-| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../deployment/helm/drivers/upgradation/drivers/powerflex) for more details |
+| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../../deployment/helm/drivers/upgrade/powerflex) for more details |
| When accessing a ROX mode PVC in OpenShift where the worker nodes run as a non-root user, you see ```Permission denied``` while accessing the PVC mount location from the pod. | Set the ```securityContext``` for the ROX mode PVC pod as below, as it defines privileges for the pods or containers.<br/><br/>securityContext:<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;runAsUser: 0<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;runAsGroup: 0 |
| After installing version v2.6.0 of the driver using the default `powerflexSdc` image, sdc:3.6.0.6, the vxflexos-node pods are in an `Init:CrashLoopBackOff` state. This issue can happen on hosts that require the SDC to be installed manually. Automatic SDC is only supported on Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, RHEL 8.6. | The SDC is already installed. Change the `images.powerflexSdc` value to an empty value in the [values](https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13) and re-install. |
| After installing version v2.8.0 of the driver using the default `powerflexSdc` image, sdc:3.6.1, the vxflexos-node pods are in an `Init:CrashLoopBackOff` state. This issue can happen on hosts that require the SDC to be installed manually. Automatic SDC is only supported on Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, RHEL 8.6. | The SDC is already installed. Change the `images.powerflexSdc` value to an empty value in the [values](https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13) and re-install. |
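
For the node-down entry in the table above, the workaround can be sketched as two kubectl commands. This is a sketch only; the pod name, namespace, and VolumeAttachment name below are hypothetical placeholders, not values from this repository.

```bash
# Sketch of the node-down workaround, with hypothetical names.
# 1. Force delete the pod that was running on the failed node.
kubectl delete pod my-app-0 -n my-namespace --force --grace-period=0

# 2. Find the VolumeAttachment still pointing at the failed node, then delete it.
kubectl get volumeattachments | grep failed-node-name
kubectl delete volumeattachment csi-0123456789abcdef

# The volume can now be attached to the replacement node.
```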
2 changes: 1 addition & 1 deletion content/docs/csidriver/troubleshooting/powermax.md
@@ -10,7 +10,7 @@ description: Troubleshooting PowerMax Driver
| `kubectl logs powermax-controller-<xyz> -n <namespace> driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty and contains the valid certificates|
|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.23.0 < 1.27.0 which is incompatible with Kubernetes V1.23.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which are not supported.|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
-| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../deployment/helm/drivers/upgradation/drivers/powermax) for more details |
+| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../../deployment/helm/drivers/upgrade/powermax) for more details |
| The migration group is in the “migrated” state but is unable to move to the “commit ready” state because the new paths are not being discovered on the cluster nodes.| Run the following commands manually on the cluster nodes: `rescan-scsi-bus.sh -i` `rescan-scsi-bus.sh -a`|
| `Failed to fetch details for array: 000000000000. [Unauthorized]` | Make sure that the correct encrypted username and password are used in the secret files, and ensure that RBAC is enabled for the user |
| `Error looking up volume for idempotence check: Not Found` or `Get Volume step fails for: (000000000000) symID with error (Invalid Response from API)`| Make sure that the Unisphere endpoint doesn't end with a forward slash |
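
For the certificate entry in the table above, a quick check of the secret can be sketched as follows; the `powermax` namespace is an assumption, so substitute the namespace your driver is installed in.

```bash
# Sketch: confirm the powermax-certs secret exists and is not empty.
# The namespace is an assumption -- use the namespace of your driver install.
kubectl get secret powermax-certs -n powermax -o yaml
# Inspect the data field for the expected CA certificate entries.
```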
2 changes: 1 addition & 1 deletion content/docs/csidriver/troubleshooting/powerstore.md
@@ -7,7 +7,7 @@ description: Troubleshooting PowerStore Driver
| --- | --- |
| When you run the command `kubectl describe pods powerstore-controller-<suffix> -n csi-powerstore`, the system indicates that the driver image could not be loaded. | - If on Kubernetes, edit the daemon.json file found in the registry location and add `{ "insecure-registries" :[ "hostname.cloudapp.net:5000" ] }` <br> - If on OpenShift, run the command `oc edit image.config.openshift.io/cluster` and add the registries to the YAML file that is displayed when you run the command.|
| The `kubectl logs -n csi-powerstore powerstore-node-<suffix>` driver logs show that the driver can't connect to the PowerStore API. | Check if you've created a secret with the correct credentials |
-|Installation of the driver on Kubernetes supported versions fails with the following error: <br />```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerstore/#optional-volume-snapshot-requirements)|
+|Installation of the driver on Kubernetes supported versions fails with the following error: <br />```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../../deployment/helm/drivers/installation/powerstore/#optional-volume-snapshot-requirements)|
| If the PVC is not getting created and the following error appears in the PVC description: <br />```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with the correct credentials |
| If the NVMeFC pod is not getting created and the host loses the SSH connection, causing the driver pods to go to the error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
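
For the image-load entry in the table above, the Kubernetes-side fix can be sketched as below. The registry hostname comes from the example in the table; the daemon.json path and the Docker restart assume a Docker-based worker node, so adjust for your container runtime.

```bash
# Sketch: trust an insecure registry on a worker node running Docker.
# Append the registry to /etc/docker/daemon.json, e.g.:
#   { "insecure-registries": ["hostname.cloudapp.net:5000"] }
sudo vi /etc/docker/daemon.json
sudo systemctl restart docker
```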
24 changes: 12 additions & 12 deletions content/docs/deployment/_index.md
@@ -36,38 +36,38 @@ The Container Storage Modules and the required CSI Drivers can each be deployed
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
-{{< card header="[Dell Container Storage Module for Observability](helm/modules/observability/)"
+{{< card header="[Dell Container Storage Module for Observability](helm/modules/installation/observability/)"
footer="Installs Observability Module">}}
CSM for Observability can be deployed via Helm, the CSM operator, the CSM for Observability Installer, or the CSM for Observability Offline Installer.
-[...More on installation instructions](helm/modules/observability/)
+[...More on installation instructions](helm/modules/installation/observability/)
{{< /card >}}
-{{< card header="[Dell Container Storage Module for Authorization](helm/modules/authorization/)"
+{{< card header="[Dell Container Storage Module for Authorization](helm/modules/installation/authorization/)"
footer="Installs Authorization Module">}}
CSM Authorization can be installed using the provided Helm v3 charts on Kubernetes platforms or via the CSM operator.
-[...More on installation instructions](helm/modules/authorization/)
+[...More on installation instructions](helm/modules/installation/authorization/)
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
-{{< card header="[Dell Container Storage Module for Resiliency](helm/modules/resiliency)"
+{{< card header="[Dell Container Storage Module for Resiliency](helm/modules/installation/resiliency)"
footer="Installs Resiliency Module">}}
CSI drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed via variables in the chart. It can be updated via the _podmon_ block specified in _values.yaml_, and it can also be installed via the CSM operator.
-[...More on installation instructions](helm/modules/resiliency)
+[...More on installation instructions](helm/modules/installation/resiliency)
{{< /card >}}
-{{< card header="[Dell Container Storage Module for Replication](helm/modules/replication)"
+{{< card header="[Dell Container Storage Module for Replication](helm/modules/installation/replication)"
footer="Installs Replication Module">}}
The Replication module can be installed by installing repctl, the Container Storage Modules (CSM) for Replication Controller, and the CSI driver after enabling replication. It can also be installed via the CSM operator.
-[...More on installation instructions](helm/modules/replication)
+[...More on installation instructions](helm/modules/installation/replication)
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
-{{< card header="[Dell Container Storage Module for Application Mobility](helm/modules/applicationmobility)"
+{{< card header="[Dell Container Storage Module for Application Mobility](helm/modules/installation/applicationmobility)"
footer="Installs Application Mobility Module">}}
The Application Mobility module can be installed via Helm charts. This is a tech-preview release and requires a license for installation.
-[...More on installation instructions](helm/modules/applicationmobility)
+[...More on installation instructions](helm/modules/installation/applicationmobility)
{{< /card >}}
-{{< card header="[Dell Container Storage Module for Encryption](helm/modules/encryption)"
+{{< card header="[Dell Container Storage Module for Encryption](helm/modules/installation/encryption)"
footer="Installs Encryption Module">}}
Encryption can be optionally installed via the PowerScale CSI driver Helm chart.
-[...More on installation instructions](helm/modules/encryption)
+[...More on installation instructions](helm/modules/installation/encryption)
{{< /card >}}
{{< /cardpane >}}
4 changes: 2 additions & 2 deletions content/docs/deployment/csminstallationwizard/_index.md
@@ -90,9 +90,9 @@ The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a
5. If Observability is checked in the wizard, refer to [Observability](../csmoperator/modules/observability#post-installation-dependencies) to export metrics to Prometheus and load the Grafana dashboards.
-6. If Authorization is checked in the wizard, only the sidecar is enabled. Refer to [Authorization](../../authorization/deployment/helm/) to install and configure the CSM Authorization Proxy Server.
+6. If Authorization is checked in the wizard, only the sidecar is enabled. Refer to [Authorization](../../deployment/helm/modules/installation/authorization/) to install and configure the CSM Authorization Proxy Server.
-7. If Replication is checked in the wizard, refer to [Replication](../../replication/deployment/) on configuring communication between Kubernetes clusters.
+7. If Replication is checked in the wizard, refer to [Replication](../../deployment/helm/modules/installation/replication/) on configuring communication between Kubernetes clusters.
8. If your Kubernetes distribution doesn't have the Volume Snapshot feature enabled, refer to [this section](../../snapshots) to install the Volume Snapshot CRDs and the default snapshot controller.
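
For step 8 above, the install can be sketched with the upstream external-snapshotter manifests. This is a sketch, not the documented procedure: the v6.3.2 tag is an assumption, so use the release that the linked section specifies for your driver.

```bash
# Sketch: install the Volume Snapshot CRDs and the default snapshot controller.
# The v6.3.2 tag is an assumption -- use the release the linked section calls for.
git clone --branch v6.3.2 https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -
kubectl kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```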
2 changes: 1 addition & 1 deletion content/docs/deployment/csmoperator/drivers/powerscale.md
@@ -100,7 +100,7 @@ kubectl get csm --all-namespaces
The driver will continue to use the previous values if an error is found in the YAML file.

3. Create isilon-certs-n secret.
-   Please refer to [this section](../../../../csidriver/installation/helm/isilon/#certificate-validation-for-onefs-rest-api-calls) for creating cert-secrets.
+   Please refer to [this section](../../../../deployment/helm/drivers/installation/isilon/#certificate-validation-for-onefs-rest-api-calls) for creating cert-secrets.

If certificate validation is skipped, an empty secret must be created (for example, empty-secret.yaml).
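
A minimal sketch of creating such an empty secret follows; the secret name `isilon-certs-0` and namespace `isilon` are assumptions, so match them to your deployment.

```bash
# Sketch: create an empty secret when certificate validation is skipped.
# Secret name and namespace are assumptions -- match your driver install.
kubectl create secret generic isilon-certs-0 -n isilon
```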

6 changes: 3 additions & 3 deletions content/docs/deployment/csmoperator/modules/authorization.md
@@ -128,12 +128,12 @@ Once the Authorization CR is created, you can verify the installation as mention

### Install Karavictl

-Follow the instructions available in CSM Authorization for [Installing karavictl](../../../helm/modules/authorization/#install-karavictl).
+Follow the instructions available in CSM Authorization for [Installing karavictl](../../../helm/modules/installation/authorization/#install-karavictl).

### Configure the CSM Authorization Proxy Server

-Follow the instructions available in CSM Authorization for [Configuring the CSM Authorization Proxy Server](../../../helm/modules/authorization/#configuring-the-csm-authorization-proxy-server).
+Follow the instructions available in CSM Authorization for [Configuring the CSM Authorization Proxy Server](../../../helm/modules/installation/authorization/#configuring-the-csm-authorization-proxy-server).

### Configure a Dell CSI Driver with CSM Authorization

-Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../helm/modules/authorization/#configuring-a-dell-csi-driver-with-csm-for-authorization).
+Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../helm/modules/installation/authorization/#configuring-a-dell-csi-driver-with-csm-for-authorization).