
Does KIND node share the host cpu and memory #2805

Closed
morningspace opened this issue Jun 17, 2022 · 5 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@morningspace

morningspace commented Jun 17, 2022

I just noticed that KIND nodes appear to share the allocatable CPU and memory of the host: when I describe the nodes, each one reports allocatable CPU and memory equal to the host's allocatable CPU and memory. For example, on a machine with 16 cores and 64 GB of RAM, every KIND node reports 16 cores and 64 GB allocatable.

Because of this, the CPU and memory usage reported by metrics-server (installed after KIND is launched) seems misleading: the overall capacity is the actual host capacity multiplied by the number of nodes, so the more nodes I have, the larger the reported value.

Is this a default behavior that is configurable, or is it inherent to how KIND works, with each node simulated as a Docker container?

@morningspace morningspace added the kind/support Categorizes issue or PR as a support question. label Jun 17, 2022
@BenTheElder
Member

The latter. It may be possible to fix someday, but it's more or less due to how kind works.

Multi-node is really only suitable for testing certain Kubernetes internals, such as rolling-update behavior. It may occasionally be necessary for application testing for similar reasons, but it does not provide resource isolation, and using a single node is otherwise better.

@morningspace
Author

Thanks @BenTheElder for your prompt reply. It looks like #1524 also mentions a similar use case, with a workaround based on modifying kubeadmConfigPatches. I wonder if that could become part of the FAQ in the KIND docs.
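For reference, the workaround discussed in #1524 patches the kubelet configuration so that each node reserves part of the host's resources, shrinking its advertised allocatable. A minimal sketch of such a kind config, assuming a 16-core/64 GB host; the reserved values are made-up placeholders, and the exact patch in #1524 may differ:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        # Placeholder values: reserve most of a 16c/64G host so this
        # node's allocatable shrinks accordingly; tune per node.
        system-reserved: cpu=12,memory=48Gi
```

Note that this only lowers what the kubelet advertises to the scheduler; the node's container still sees the full host resources.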

@BenTheElder
Member

That approach is a bit of a hack and I haven't yet been able to vouch for its effectiveness.

In general I think if you're trying to limit resources like this, you probably need VM or physical nodes at the moment.

@aojea
Contributor

aojea commented Jun 17, 2022

The solution there limits the resources, but the nodes still see the full host resources; you have to fake the procfs filesystem if you want each node to report different resources.

#877 (comment)

@morningspace
Author

Thanks @BenTheElder and @aojea. With that, I'm going to close this issue, as my initial question has been answered.
