$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 (Ootpa)
$ kind --version
kind version 0.14.0
$ podman --version
podman version 4.0.2
Running a JBoss server inside the container fails because an init method cannot create a native thread:
2022/08/24 09:37:30.000778 ERROR <org.jboss.msc.service.fail ServerService Thread Pool -- 58> MSC000001: Failed to start service jboss.deployment.unit."xxxxx.war".undertow-deployment:
org.jboss.msc.service.StartException in service jboss.deployment.unit."xxxxx.war".undertow-deployment:
java.lang.RuntimeException:
org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'mapTaskExecutorPostInit': Invocation of init method failed; nested exception is java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
at [email protected]//org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at [email protected]//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at [email protected]//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
at [email protected]//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at [email protected]//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at [email protected]//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mapTaskExecutorPostInit': Invocation of init method failed; nested exception is java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
at [email protected]//io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:257)
at [email protected]//org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:96)
at [email protected]//org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:78)
After investigation, I identified the root cause: podman run applies a default pids limit when none is specified (per the documentation this limit is 4096; in my case it is 2048). This can be verified by checking the cgroup file system:
/sys/fs/cgroup/user.slice/user-[your default user id].slice/user@[your default user id].service/user.slice/libpod-[kind container's Id key].scope/pids.max
# Example:
/sys/fs/cgroup/user.slice/user-47923.slice/[email protected]/user.slice/libpod-92cde86facd269865f242d0f2ea724f31f5791e21082beb6978a7e0a6bef3d04.scope/pids.max
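As a convenience (my sketch, not from the original report), the path above can be assembled from the user id and container id shown in the example, and the current limit read if the container is running:

```shell
# Build the rootless-podman cgroup path for pids.max; uid and cid are
# taken from the example above and are placeholders for your own values.
uid=47923
cid=92cde86facd269865f242d0f2ea724f31f5791e21082beb6978a7e0a6bef3d04
path="/sys/fs/cgroup/user.slice/user-${uid}.slice/user@${uid}.service/user.slice/libpod-${cid}.scope/pids.max"
echo "$path"
# Read the current limit only if the container (and thus the file) exists.
if [ -f "$path" ]; then
  cat "$path"
fi
```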
I tested the following change and it fixed my issue:
diff --git a/pkg/cluster/internal/providers/podman/provision.go b/pkg/cluster/internal/providers/podman/provision.go
index 0935b48d..fd898be5 100644
--- a/pkg/cluster/internal/providers/podman/provision.go
+++ b/pkg/cluster/internal/providers/podman/provision.go
@@ -136,6 +136,7 @@ func commonArgs(cfg *config.Cluster, networkName string, nodeNames []string) ([]
"--label", fmt.Sprintf("%s=%s", clusterLabelKey, cfg.Name),
// specify container implementation to systemd
"-e", "container=podman",
+ "--pids-limit", "-1", // Set to -1 to have unlimited pids for the container
}
// enable IPv6 if necessary
The limit went from 2048 to the default max task value set by the system, in my case 411787.
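As a sanity check (my commands, not part of the original report), with the pids limit lifted the effective ceiling comes from the parent cgroup and kernel limits; one relevant system-wide knob can be inspected like this:

```shell
# With --pids-limit -1 the container's pids.max reads "max", so the
# effective bound falls back to parent cgroups and kernel limits.
# threads-max is the kernel-wide ceiling on the number of tasks.
cat /proc/sys/kernel/threads-max
```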
I believe we're talking about the same issue as described here: #2830 (comment)
There is also a quick recipe for mitigating it without a change in kind, i.e. by setting an explicit limit (or lifting it entirely) in containers.conf.
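For reference, the containers.conf mitigation would look roughly like this (a sketch based on the containers.conf(5) documentation, where a pids_limit of 0 means no limit; the file location, e.g. ~/.config/containers/containers.conf for rootless podman, is an assumption):

```toml
[containers]
# Maximum number of processes allowed in a container.
# 0 lifts the limit entirely; a positive value sets an explicit cap.
pids_limit = 0
```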
The sf-operator cannot be deployed when another operator has already been
deployed by someone. The new pods get an error that some limits are
reached.
This commit sets the podman pids_limit to unlimited [1].
[1] kubernetes-sigs/kind#2896 (comment)
Change-Id: Ie314ed4e60ea127bca1af7e9d60548a5346fa203