Update documentation for NVMeTCP support for CSI Powermax
delldubey committed Jun 25, 2024
1 parent 23c29b6 commit 98f0738
Showing 4 changed files with 130 additions and 81 deletions.
content/docs/csidriver/_index.md (3 changes: 2 additions & 1 deletion)
@@ -41,7 +41,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
|---------------|:----------------:|:------------------:|:----------------:|:----------------:|:----------------:|
| Fibre Channel | yes | N/A | yes | N/A | yes |
| iSCSI | yes | N/A | yes | N/A | yes |
- | NVMeTCP | N/A | N/A | N/A | N/A | yes |
+ | NVMeTCP | yes | N/A | N/A | N/A | yes |
| NVMeFC | N/A | N/A | N/A | N/A | yes |
| NFS | yes - SDNAS only (not eNAS) | yes | yes | yes | yes |
| Other | N/A | ScaleIO protocol | N/A | N/A | N/A |
@@ -50,3 +50,4 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
| Platform-specific configurable settings | Service Level selection<br>iSCSI CHAP | - | Host IO Limit<br>Tiering Policy<br>NFS Host IO size<br>Snapshot Retention duration | Access Zone<br>NFS version (3 or 4);Configurable Export IPs | iSCSI CHAP |
| Auto RDM(vSphere) | Yes(over FC) | N/A | N/A | N/A | N/A |
{{</table>}}

content/docs/csidriver/features/powermax.md (10 changes: 10 additions & 0 deletions)
@@ -636,3 +636,13 @@ This feature is also supported for limiting the volume provisioning on Kubernete
>**NOTE:** <br>The default value of `maxPowerMaxVolumesPerNode` is 0. <br>If `maxPowerMaxVolumesPerNode` is set to zero, the Container Orchestrator (CO) decides how many volumes of this type can be published by the controller to the node.<br><br>The volume limit specified in the `maxPowerMaxVolumesPerNode` attribute applies to all nodes in the cluster that do not have the `max-powermax-volumes-per-node` node label set.
<br>The maximum supported number of RDM volumes per VM is 60, as per VMware limitations. <br>If the value is set both by node label and in the values.yaml file, the node label takes precedence; remove the node label for the values.yaml value to take effect.
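As an illustration of the per-node override described above, the limit can be expressed as a node label. This is a sketch only: the node name `worker-1` and the limit `10` are hypothetical values, not from the docs.

```yaml
# Hypothetical Node metadata excerpt: the label below overrides the
# cluster-wide maxPowerMaxVolumesPerNode value for this node only.
apiVersion: v1
kind: Node
metadata:
  name: worker-1                             # hypothetical node name
  labels:
    max-powermax-volumes-per-node: "10"      # hypothetical per-node limit
```

In practice such a label is typically applied with `kubectl label node`; remember that while the label is present it takes precedence over values.yaml.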

## NVMe/TCP Support

The CSI Driver for Dell PowerMax supports NVMe/TCP from v2.11.0 onward. To enable NVMe/TCP provisioning, set `blockProtocol` to `NVMETCP` in the settings file.

**Limitations**<br>
The following CSM modules are not supported with the NVMeTCP protocol:
- CSM Authorization
- CSM Observability
- CSM Application Mobility
- Metro Replication
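As a minimal sketch, the protocol selection described above would appear in the settings file as shown below. This is an excerpt of a hypothetical values/settings file; only the `blockProtocol` key and its `NVMETCP` value are taken from the text above, and the surrounding layout is an assumption.

```yaml
# Hypothetical settings-file excerpt: enable NVMe/TCP provisioning.
# Only blockProtocol/NVMETCP is from the docs; verify the key's exact
# location in your driver's values file before use.
blockProtocol: NVMETCP
```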
content/docs/deployment/csmoperator/drivers/powermax.md (19 changes: 19 additions & 0 deletions)
@@ -15,6 +15,7 @@ The CSI Driver for Dell PowerMax can create PVCs with different storage protocols
* direct Fibre Channel
* direct iSCSI
* NFS
* NVMeTCP
* Fibre Channel via VMware Raw Device Mapping
In most cases, you will use only one protocol; therefore, you only need to meet the prerequisites for that protocol and not the others.

@@ -33,6 +34,24 @@ CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that t
- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.

### NVMeTCP requirements
To use the NVMe/TCP protocol, set up the NVMe initiators as follows:
- Setup on Array
  - When creating the NVMe endpoints, set the endpoint name to ```nqn.1988-11.com.dell:<sid><dir><port>```
- Setup on Host
  - The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides command-line and interactive shell options. Install it on the host with the command below (shown for Debian/Ubuntu-based distributions):
```bash
sudo apt install nvme-cli
```
**Requirements for NVMeTCP**
- The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required to use NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the commands below:
```bash
modprobe nvme
modprobe nvme_tcp
```
- The NVMe modules may not be available after a node reboot. Loading the modules at startup is recommended.
- Generate the host NQN and write it to _/etc/nvme/hostnqn_.
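The endpoint-name convention from the "Setup on Array" step above can be sketched as follows. This is illustrative only: the SID, director, and port values are made-up placeholders, not real array identifiers.

```shell
# Compose an NVMe endpoint name in the nqn.1988-11.com.dell:<sid><dir><port>
# convention from the array setup step. All three values are placeholders.
SID="000120000123"   # placeholder Symmetrix ID
DIR="OR1C"           # placeholder director
PORT="001"           # placeholder port
ENDPOINT="nqn.1988-11.com.dell:${SID}${DIR}${PORT}"
echo "${ENDPOINT}"   # prints nqn.1988-11.com.dell:000120000123OR1C001
```

Substitute the SID, director, and port of your own array when creating the endpoints.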

### iSCSI Requirements

The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
