fix: update resource request and limits across containers #212
Conversation
Skipping CI for Draft Pull Request.
Force-pushed from d9b5022 to 7b73e0d.
Please add resource limits and requests to the main operator controller pod. See https://github.com/red-hat-storage/ocs-osd-deployer/blob/main/config/manager/manager.yaml for how to do this.
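For context, the pattern in that file is a resources stanza on the manager container, roughly like the sketch below. The container name and all values here are illustrative placeholders, not the numbers from the linked manifest.

# Sketch only: the "manager" container name and every value are assumptions.
containers:
  - name: manager
    resources:
      requests:
        cpu: 100m
        memory: 50Mi
      limits:
        cpu: 200m
        memory: 100Mi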
    memory: 100Mi
  requests:
-   cpu: 100m
+   cpu: 30m
Please raise these values. We are well within the CPU limits.
The metrics-exporter would also need resource definitions.
Updated the manager container to use the old values. Added resource limits and requests for the metrics exporter.
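For illustration, a resources block for the metrics exporter could look like the sketch below. Every value here is a placeholder I am assuming; the thread does not state the numbers that actually landed in the PR.

# Assumed sketch for the metrics-exporter container; all values are placeholders.
- name: metrics-exporter
  resources:
    requests:
      cpu: 50m
      memory: 30Mi
    limits:
      cpu: 100m
      memory: 80Mi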
TopolvmNodeMemLimit   = "150Mi"
TopolvmNodeCPURequest = "100m"
TopolvmNodeCPULimit   = "150m"
Please add separate values for lvmd. The topolvm-node container uses a lot less. lvmd can use a request value of 150m and a limit of 200m.
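A minimal sketch of that suggestion as a container resources stanza: the CPU numbers are the ones requested in this comment, while the memory numbers are placeholders that are not specified anywhere in this thread.

- name: lvmd
  resources:
    requests:
      cpu: 150m      # request value from this comment
      memory: 100Mi  # placeholder; not specified in this thread
    limits:
      cpu: 200m      # limit value from this comment
      memory: 150Mi  # placeholder; not specified in this thread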
Added separate values for lvmd and topolvm-node. But topolvm-node uses much more than lvmd. For example:
Report for topolvm-node daemonset's Pod topolvm-node-5jb5m between 2022-06-21 12:35:20 +0530 IST and 2022-06-21 12:52:12 +0530 IST
CPU (min|max|avg) seconds: 0.0019 | 0.2090 | 0.1030
MEM (min|max|avg) MiB:     100.8945 | 137.1836 | 120.3202
Report for topolvm-node daemonset's Container lvmd between 2022-06-21 12:35:20 +0530 IST and 2022-06-21 12:52:12 +0530 IST
CPU (min|max|avg) seconds: 0.0000 | 0.1910 | 0.0933
MEM (min|max|avg) MiB:     31.9258 | 62.1250 | 47.2663
The other results (40 pods, 45 pods) show consistently lower values for the topolvm-node vs the lvmd containers.
My bad, I was looking at the entire pod. :/
What do you think about the current resource allocation for the lvmd and topolvm-node containers?
The lvmd container looks fine. The topolvm-node CPU numbers can be reduced to 50m for the request and 100m for the limit.
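Putting that together for topolvm-node: the CPU numbers below come from this comment and the memory limit from the TopolvmNodeMemLimit constant shown earlier; the memory request is a placeholder.

- name: topolvm-node
  resources:
    requests:
      cpu: 50m       # reduced per this comment
      memory: 100Mi  # placeholder; not specified in this thread
    limits:
      cpu: 100m      # reduced per this comment
      memory: 150Mi  # TopolvmNodeMemLimit from the constants above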
The lvm-operator manager CPU numbers can be reduced as well. Will it work with, say, 50m and 100m?
Please raise the memory request for the lvm-operator; let's make it at least 50Mi.
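Combining the last two comments, the manager stanza would end up roughly as below: CPU 50m/100m from the question above, the memory request raised to 50Mi as asked here, and a 100Mi memory limit that mirrors the diff fragment further down (otherwise an assumption).

resources:
  requests:
    cpu: 50m       # suggested above
    memory: 50Mi   # "at least 50" per this comment
  limits:
    cpu: 100m      # suggested above
    memory: 100Mi  # mirrors the later diff fragment; otherwise an assumption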
/lgtm
/cherry-pick release-4.11
@agarwal-mudit: once the present PR merges, I will cherry-pick it on top of release-4.11 in a new PR and assign it to you.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
    memory: 100Mi
  requests:
-   cpu: 100m
+   cpu: 50m
    memory: 20Mi
Please raise this memory request.
@sp98: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Signed-off-by: Santosh Pillai <[email protected]>
Signed-off-by: Santosh Pillai <[email protected]>
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: iamniting, nbalacha, sp98. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@agarwal-mudit: new pull request created: #221
Reference doc
Testing:
Signed-off-by: Santosh Pillai [email protected]