From 7f41b36ecaa6fe91b6cba3b6016fd4a3dfa7aa97 Mon Sep 17 00:00:00 2001
From: Martin Schuppert
Date: Fri, 5 Jul 2024 15:34:47 +0200
Subject: [PATCH] Bump memory limits for the operator controller-manager pod

Starting with OCP 4.16, the currently set limits need to be bumped to
prevent the pods from getting OOMKilled, e.g.:

NAME                                                     CPU(cores)   MEMORY(bytes)
swift-operator-controller-manager-6764c568ff-r4mzl      2m           468Mi
telemetry-operator-controller-manager-7c4fd577b4-9nnvq  2m           482Mi

The reason for this is probably the move to cgroups v2. OCP 4.16
release notes:

~~~
Beginning with OpenShift Container Platform 4.16, Control Groups version 2
(cgroup v2), also known as cgroup2 or cgroupsv2, is enabled by default for
all new deployments, even when performance profiles are present. Since
OpenShift Container Platform 4.14, cgroups v2 has been the default, but the
performance profile feature required the use of cgroups v1. This issue has
been resolved.
~~~

Upstream memory increase discussion:
https://github.com/kubernetes/kubernetes/issues/118916

While the upstream issue mentions that only the node memory stats are
wrong and the pod stats are correct, it is probably related.

Resolves: https://issues.redhat.com/browse/OSPRH-8379

Signed-off-by: Martin Schuppert
---
 config/manager/manager.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/config/manager/manager.yaml b/config/manager/manager.yaml
index 8f7314e4..2eaeab29 100644
--- a/config/manager/manager.yaml
+++ b/config/manager/manager.yaml
@@ -65,9 +65,9 @@ spec:
         resources:
           limits:
             cpu: 500m
-            memory: 256Mi
+            memory: 768Mi
           requests:
             cpu: 10m
-            memory: 128Mi
+            memory: 512Mi
       serviceAccountName: controller-manager
       terminationGracePeriodSeconds: 10
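
Note: the usage figures quoted in the commit message can be reproduced with
the OpenShift metrics CLI. A minimal sketch follows; the namespace used here
is an assumption, not something stated in the patch, and may differ per
deployment:

~~~
# Sketch: list current CPU/memory usage of the operator controller-manager
# pods. The namespace below is assumed for illustration only.
oc adm top pods -n openstack-operators | grep controller-manager
~~~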