Customize MAX_VOLUMES_PER_NODE, allowing for more than 59 disks per VM #97
Conversation
```diff
@@ -117,7 +117,15 @@ spec:
             - name: CSI_ENDPOINT
               value: unix:///csi/csi.sock
             - name: MAX_VOLUMES_PER_NODE
-              value: "59" # Maximum number of volumes that controller can publish to the node. If value is not set or zero Kubernetes decide how many volumes can be published by the controller to the node.
+              {{- if .Values.maxPvscsiTargetsPerVm.enabled }}
+              {{- if gt .Values.csiNode.maxVolumesPerNode 255 }}
```
This is a string. You can't call `gt` on a string.
Suggested change:

```diff
-{{- if gt .Values.csiNode.maxVolumesPerNode 255 }}
+{{- if gt (int .Values.csiNode.maxVolumesPerNode) 255 }}
```
Since we're doing range checks here, do you also want to ensure that it's a valid int, and greater than 0 or some other lower bound?
As implemented, setting a value lower than 1 is not rejected.
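For illustration, one way the range check could be written so that non-integer and out-of-range values fail at template time. This is only a sketch using Helm's built-in `int` and `fail` functions; the indentation and the fallback to `"59"` are assumptions, not the chart's final template:

```yaml
            - name: MAX_VOLUMES_PER_NODE
              {{- if .Values.maxPvscsiTargetsPerVm.enabled }}
              {{- /* coerce to int so both strings and numbers are handled */}}
              {{- $max := int .Values.csiNode.maxVolumesPerNode }}
              {{- if or (lt $max 1) (gt $max 255) }}
              {{- fail "csiNode.maxVolumesPerNode must be an integer between 1 and 255" }}
              {{- end }}
              value: {{ $max | quote }}
              {{- else }}
              value: "59"
              {{- end }}
```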
```diff
@@ -117,7 +117,15 @@ spec:
             - name: CSI_ENDPOINT
               value: unix:///csi/csi.sock
             - name: MAX_VOLUMES_PER_NODE
-              value: "59" # Maximum number of volumes that controller can publish to the node. If value is not set or zero Kubernetes decide how many volumes can be published by the controller to the node.
+              {{- if and (.Values.maxPvscsiTargetsPerVm.enabled) (gt (int .Values.csiNode.maxVolumesPerNode) 0 ) }}
```
This appears to require both vSphere 8 and a feature gate on the CSI controller. Instead of removing the comment here, can you update it to reflect these restrictions?
https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/c4513d794f3aa891e30a7e9d0a2306b5868a2646/pkg/csi/service/node.go#L37-L44
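Something along these lines might work as the updated comment (the wording is only a suggestion, based on the restrictions discussed in this thread):

```yaml
- name: MAX_VOLUMES_PER_NODE
  # Maximum number of volumes the controller can publish to the node (up to 255).
  # Values above 59 additionally require pvscsiCtrlr256DiskSupportEnabled=true in
  # vCenter's VsanVcMgmtConfig.xml and a vSphere release with 256-disk PVSCSI support.
  value: {{ .Values.csiNode.maxVolumesPerNode | quote }}
```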
Actually the requirement seems to be vSphere 7 and up, although I was not able to test it in versions prior to vSphere 8. In addition, the `VsanVcMgmtConfig.xml` file might be in a different folder depending on the vCenter version: the linked issue mentions `/usr/lib/vmware-vpx/vsan-health/`, but I found it in `/usr/lib/vmware-vsan`. I will update the comment to reflect the feature gate restriction in vCenter, but I was not able to identify the exact vSphere version from which this feature is available (I suspect vSphere 7 Update 3).
I don't know where this is authoritatively documented; I was just going off of the code comment and the PR at kubernetes-sigs/vsphere-csi-driver#1696.
Sure, I was trying to extract the information from the official vSphere documentation, but apparently this is not yet documented for CNS, although it states that 64 disks per PvSCSI controller are supported since vSphere 6.7 (https://configmax.broadcom.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=1-0).
The comment at kubernetes-sigs/vsphere-csi-driver#2800 (comment) led me to believe it was possible with vSphere 7, since the file path @divyenpatel mentions only exists in vCenter appliance version 7.
Nevertheless, since we know for sure it works in vSphere 8, I will update the comments accordingly.
Pull Request Checklist
Types of Change
In order to allow the use of more than 59 CNS volumes on a node, it is necessary to both:

- set `maxPvscsiTargetsPerVm.enabled` to `true`
- set `MAX_VOLUMES_PER_NODE` to a value greater than `59` (up to `255` supported)

The latter required manual intervention in the vsphere-csi-node DaemonSet. With this pull request it is possible to set the `csiNode.maxVolumesPerNode` parameter to the desired value, up to `255` (the maximum currently supported by vSphere), if, and only if, `maxPvscsiTargetsPerVm.enabled` is set to `true`; an illustrative values snippet is shown below.
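For illustration, the values could look roughly like this (the key names follow the template hunks above; `128` is just an example target):

```yaml
# values.yaml (illustrative snippet)
maxPvscsiTargetsPerVm:
  enabled: true            # also requires the vCenter-side change described below
csiNode:
  maxVolumesPerNode: 128   # anything from 60 up to 255
```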
Linked Issues
(kubernetes-sigs/vsphere-csi-driver#2800)
Additional Notes
Please note that it is also necessary to adjust a vCenter config file (as per the linked issue above); a rough sketch of these steps is shown after the list:

- edit `/usr/lib/vmware-vsan/VsanVcMgmtConfig.xml` and set the parameter `pvscsiCtrlr256DiskSupportEnabled` to `true`
- restart `vsan-health` using the command `service-control --restart vmware-vsan-health`
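A sketch of those two steps on the vCenter Server Appliance shell. The exact XML layout of the flag is an assumption here, and older vCenter builds may keep the file under `/usr/lib/vmware-vpx/vsan-health/` as noted earlier in the thread:

```sh
# 1. Enable 256-disk PVSCSI support
#    (assuming the flag is a plain XML element, i.e.
#     <pvscsiCtrlr256DiskSupportEnabled>true</pvscsiCtrlr256DiskSupportEnabled>)
vi /usr/lib/vmware-vsan/VsanVcMgmtConfig.xml

# 2. Restart the vSAN health service so the change takes effect
service-control --restart vmware-vsan-health
```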
The procedure was tested using vSphere 8 Update 3. After restarting the `vsan-health` service and setting the correct parameters in the vSphere CSI Driver, it was possible to attach more than 59 CNS disks to a single VM.
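As a possible sanity check once the node plugin has been reconfigured, the advertised attach limit can be read from the node's CSINode object (the driver name is assumed to be the standard `csi.vsphere.vmware.com`):

```sh
# Print the per-node volume limit reported by the vSphere CSI driver
kubectl get csinode <node-name> \
  -o jsonpath='{.spec.drivers[?(@.name=="csi.vsphere.vmware.com")].allocatable.count}'
```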