
cloudhealth-collector pod gets restarted due to emptydir #119

Merged
merged 1 commit into from Jun 6, 2024

Conversation

bbilali
Contributor

@bbilali bbilali commented May 17, 2024

Currently there's no limit on the amount of memory the emptyDir can consume. According to kubernetes/kubernetes#119611, this can end up crashing the node, because the pod's memory limit is not considered for the emptyDir: the volume can consume all of the node's memory, causing other processes to be killed. Setting the limit to half of the allocated memory should be fine.

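For context, the change amounts to putting a sizeLimit on the collector's emptyDir volume. A minimal sketch of the idea, assuming a memory-backed emptyDir; the names, image, and values below are illustrative placeholders, not the chart's actual contents:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cloudhealth-collector            # hypothetical name for illustration
spec:
  containers:
    - name: collector
      image: example/collector:latest    # placeholder image
      resources:
        limits:
          memory: 512Mi                  # example container memory limit
      volumeMounts:
        - name: scratch                  # hypothetical volume name
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory                   # tmpfs; usage consumes node memory
        sizeLimit: 256Mi                 # half the memory limit, per this PR
```

Note that when an emptyDir exceeds its sizeLimit, the kubelet evicts the pod, so the trade-off is a bounded pod eviction rather than unbounded node-wide memory pressure.
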
@bbilali bbilali requested a review from a team as a code owner May 17, 2024 12:09
@vmwclabot
Collaborator

@bbilali, you must sign our contributor license agreement before your changes are merged. Click here to sign the agreement. If you are a VMware employee, read this for further instruction.

@vmwclabot
Collaborator

@bbilali, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@kscherme
Contributor

Thank you for your contribution! Our team will take a look in the next few days.

Contributor

@gm-cht gm-cht left a comment

Good one! We never hit the scenario where this tmpfs becomes large by any standard, and all of our nodes use disk-backed emptyDirs instead of memory-backed ones.
Glad we have a fix for this scenario. Thank you!
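To illustrate the distinction being drawn here (a sketch, not taken from this chart): a default emptyDir is backed by the node's disk, while medium: Memory makes it a tmpfs whose usage counts against node memory. Volume names and the size value are hypothetical:

```yaml
volumes:
  # Default: backed by the node's disk (ephemeral storage).
  - name: scratch-disk          # hypothetical volume name
    emptyDir: {}
  # medium: Memory: a tmpfs mount; without a sizeLimit its usage
  # counts against node memory and can starve other processes.
  - name: scratch-tmpfs         # hypothetical volume name
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi          # illustrative value
```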

@kscherme
Contributor

kscherme commented Jun 3, 2024

Thank you for your patience! We are in contact with the Open Source team regarding your contributor license agreement and will get back to you once it is reviewed.

@vmwclabot
Collaborator

@bbilali, VMware has approved your signed contributor license agreement.

@kscherme kscherme merged commit 41b452e into CloudHealth:main Jun 6, 2024
@kscherme kscherme mentioned this pull request Jun 6, 2024