"attachable-volumes-aws-ebs" not being set on nodes even when --volume-attach-limit is used... #1258
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@diranged it also reports always zero in the node's `.status.allocatable` for me.

This is a mess, but the authors say that you should look only at the `CSINode` object, not the node's `allocatable` fields.

The Kubelet populates a `CSINode` object for every node; a CSI driver's attach limit is advertised there, under `spec.drivers[].allocatable.count`, rather than in the in-tree `attachable-volumes-aws-ebs` resource.
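To illustrate where that limit lives, here is a small Python sketch that reads the advertised count out of a CSINode manifest. The trimmed dict below is hand-written sample data standing in for `kubectl get csinode <node> -o json`, not real cluster output:

```python
# A trimmed, hand-written stand-in for a CSINode object as returned by
# `kubectl get csinode <node> -o json`. Only the fields used below are kept.
csinode = {
    "spec": {
        "drivers": [
            {"name": "ebs.csi.aws.com", "allocatable": {"count": 100}},
        ]
    }
}

def csi_volume_limit(csinode: dict, driver: str = "ebs.csi.aws.com"):
    """Return the advertised attach limit for one CSI driver, or None."""
    for d in csinode["spec"]["drivers"]:
        if d["name"] == driver:
            return d.get("allocatable", {}).get("count")
    return None

print(csi_volume_limit(csinode))  # 100
```

This is the value the scheduler consults for CSI volumes; the node's `attachable-volumes-aws-ebs` allocatable belongs to the legacy in-tree plugin and can disagree with it.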
@torredil I've set the volume limit on the CSI driver to 100 and tested on an EKS 1.23 cluster with a t3.xlarge node, and I couldn't attach more than 23 volumes (not even 25). `kubectl describe node` shows 25, while `kubectl describe csinode` shows 100.

I'm using the aws-ebs-csi-driver-2.10.1 Helm chart and the node is running v1.23.9-eks-ba74326.
@sok1234 My understanding is that t3.xlarge instances support a maximum of 28 attachments (volumes and ENIs). How many ENIs are attached to your instance?
@torredil the instance has 4 ENIs. If I understand correctly, the total available attachments are calculated as: (Total - [ENIs] - [root EBS volume]). That gives 23 volumes in this case, which is exactly what I get. It looks like the instance type is limiting the attachable volumes, which makes the configured `--volume-attach-limit` misleading. That's really bad, because even though we can have up to 58 pods on this instance type, we can only attach volumes to half of them.
@sok1234 Your calculation is correct, and I would expect to see an attachment limit of 23 (as reported) given the instance has 4 ENIs. The maximum number of attachments is limited by the instance type. By default, the CSI driver parses the instance type and decides the volume limit, but this is only a rough approximation and is not accurate in some cases. Specifying the volume limit via `--volume-attach-limit` overrides that heuristic.
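As a rough illustration of that precedence (an explicit flag wins over the instance-type heuristic), here is a simplified Python model. The per-family defaults and function name are illustrative assumptions, not the driver's actual code:

```python
# Hypothetical, simplified model of how the driver picks the volume limit
# it advertises in CSINode. The per-family defaults below are illustrative
# placeholders, not values taken from the real driver source.
DEFAULT_LIMITS_BY_FAMILY = {"t3": 25, "m5": 25}

def advertised_volume_limit(instance_type: str, volume_attach_limit=None) -> int:
    """Return the limit the driver would report for this node."""
    # An explicit --volume-attach-limit takes precedence over the heuristic.
    if volume_attach_limit is not None:
        return volume_attach_limit
    # Otherwise fall back to a rough per-instance-family approximation.
    family = instance_type.split(".")[0]
    return DEFAULT_LIMITS_BY_FAMILY.get(family, 25)

print(advertised_volume_limit("t3.xlarge"))       # heuristic default
print(advertised_volume_limit("t3.xlarge", 100))  # explicit override wins
```

Note that the advertised limit only affects what the scheduler sees; the instance's hardware attachment ceiling still applies when a volume is actually attached, which is why an advertised 100 can still fail past 23.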
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
So does the number of ENIs attached to a node only reduce the number of attachable volumes from a "default" total, or does it affect the total itself? While trying to tackle this issue and searching around Google, I found some answers that seem to imply the latter. On the other hand, I might have this wrong; I am still trying to understand what can be done to overcome this limitation.
@sotiriougeorge I've replied to your question on #1163, but for completeness the following calculation defines the maximum number of attachable volumes:

attachable volumes = 28 (shared attachment slots) - (attached ENIs) - (root EBS volume)

So for a Nitro instance with 4 ENIs and one root volume, 28 - 4 - 1 = 23 data volumes can be attached.
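The arithmetic above can be sketched as follows. The 28-slot shared pool for Nitro instances and the treatment of NVMe instance-store volumes as slot consumers are assumptions drawn from this thread, not output from the driver itself:

```python
# Most Nitro instances share a single pool of 28 attachment slots among
# ENIs, the root EBS volume, NVMe instance-store volumes, and data volumes.
NITRO_ATTACHMENT_SLOTS = 28

def max_attachable_ebs(enis: int, root_volumes: int = 1,
                       nvme_instance_store: int = 0) -> int:
    """EBS data volumes left after ENIs, root, and instance store."""
    return NITRO_ATTACHMENT_SLOTS - enis - root_volumes - nvme_instance_store

# t3.xlarge with 4 ENIs and one root volume: 28 - 4 - 1 = 23
print(max_attachable_ebs(4))  # 23
```

This matches the 23-volume ceiling observed earlier in the thread for a t3.xlarge with 4 ENIs, regardless of any higher limit configured on the driver.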
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/kind bug
What happened?

We have noticed that even though we're setting the `--volume-attach-limit=15` flag on our `ebs-csi-node` daemonsets, our instances are reporting 39 attachable EBS volumes. Shouldn't setting `--volume-attach-limit=15` turn around and tell Kubernetes that the limit is 15 for that particular node?

What you expected to happen?

I expect to see `.status.allocatable."attachable-volumes-aws-ebs"` report 15... not 39:

How to reproduce it (as minimally and precisely as possible)?
We are running EKS 1.22, Bottlerocket 1.72 nodes, and using the 2.6.7 Helm chart for the aws-ebs-csi-driver.
Anything else we need to know?:
Logs
Example pod config:
Example Node
Environment
Kubernetes: v1.22.6-eks-14c7a48