Hi, first of all thanks for this super useful plugin, it's exactly what I was missing for some time!
In order to reduce the memory footprint of apps on my cluster, I'd like to tune memory requests, so I'd like to propose an option that calculates the memory metrics not against the total node memory, but against the requests/limits (the same should be possible for CPU metrics, but my focus is on memory).
Right now kube-capacity shows 2% memory usage relative to the total node memory:
❯ kubectl resource-capacity --pods --util --sort mem.util | grep -E '(NODE|kustom)'
NODE NAMESPACE POD CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
bitrigger flux-system kustomize-controller-7dd58878b8-7jmnb 100m (2%) 0Mi (0%) 3m (0%) 64Mi (3%) 1024Mi (51%) 49Mi (2%)
What I'd be interested in is the percentage of, e.g., the memory requests (which is what k9s shows in its default pod list), which is actually about 77% here (49Mi used / 64Mi requested):
│ NAMESPACE↑ NAME PF READY RESTARTS STATUS CPU MEM %CPU/R %CPU/L %MEM/R %MEM/L IP NODE AGE │
│ flux-system kustomize-controller-7dd58878b8-7jmnb ● 1/1 0 Running 3 50 3 n/a 78 4 10.42.0.38 bitrigger 3h4m
So a flag like --percentage=[node|req|limit] could apply to both CPU and memory metrics; a rough sketch of the calculation is below.
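For illustration, here's a minimal Go sketch of how such a flag could pick the denominator for the memory percentage. This is not kube-capacity's actual code: the type and function names are made up, and the node allocatable value is only an assumption chosen so the example pod comes out at 2%.

```go
package main

import "fmt"

// percentageMode mirrors the proposed --percentage=[node|req|limit] flag.
type percentageMode string

const (
	modeNode  percentageMode = "node"
	modeReq   percentageMode = "req"
	modeLimit percentageMode = "limit"
)

// memUtilPercent returns memory utilization as a percentage of the chosen
// denominator; the bool is false when the denominator is zero (e.g. no
// request/limit set), which a tool could render as "n/a".
func memUtilPercent(usedMi, nodeAllocMi, requestMi, limitMi int64, mode percentageMode) (float64, bool) {
	var denom int64
	switch mode {
	case modeReq:
		denom = requestMi
	case modeLimit:
		denom = limitMi
	default: // modeNode: today's behaviour, percentage of node memory
		denom = nodeAllocMi
	}
	if denom == 0 {
		return 0, false
	}
	return 100 * float64(usedMi) / float64(denom), true
}

func main() {
	// Numbers from the kustomize-controller example above; the node
	// allocatable (~2450Mi) is assumed for illustration only.
	used, nodeAlloc, request, limit := int64(49), int64(2450), int64(64), int64(1024)
	for _, m := range []percentageMode{modeNode, modeReq, modeLimit} {
		if pct, ok := memUtilPercent(used, nodeAlloc, request, limit, m); ok {
			fmt.Printf("--percentage=%-5s -> %.0f%%\n", m, pct)
		}
	}
	// Prints roughly: node -> 2%, req -> 77%, limit -> 5%
}
```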