Not able to create more than 100 pods on each kind node with podman #2830
Comments
@aojea FYI

Thanks Numan

@numansiddique can you check where podman is getting the cgroup limit? I prefer not to tweak defaults if possible; these are the values I've got:
I tested on Fedora 36 with cgroups v2 enabled.
I started a podman container.
On the host:
If I pass --pids-limit=-1, then pids.max
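The check described above can be sketched as follows. This is a sketch, not exact output: the container name is a placeholder, and the cgroup path returned by podman inspect depends on the cgroup driver and distro.

```shell
# Start a throwaway container (the name "pids-test" is a placeholder).
podman run -d --name pids-test busybox sleep 300

# Read the pids controller limit from the container's cgroup.
# On cgroups v2 the file is pids.max; a value of "max" means unlimited.
CGPATH=$(podman inspect --format '{{.State.CgroupPath}}' pids-test)
cat "/sys/fs/cgroup${CGPATH}/pids.max"
```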
@mheon can you help us here?
I tested with cgroups v2 disabled. With cgroups v1:
And with pids-limit=-1:
The podman version is:
The 2048 limit is a default for security reasons, IIRC. The default can be overridden by
It's a compiled-in default, so as for why you're not seeing it, I'm not sure. Maybe a non-default containers.conf?
Most probably.
I don't like the idea of overriding security defaults; I lean toward letting users modify their
I'll test it out by configuring it in containers.conf, and then we can probably close this issue. Thanks
Closing this issue, as a user can override the limit using containers.conf. Thanks for the discussion on this topic.
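For reference, a minimal sketch of such an override, assuming the per-user containers.conf location and the pids_limit key documented in the containers.conf(5) man page (where 0 means no limit):

```shell
# Sketch: raise or remove podman's default per-container PID limit via
# the per-user containers.conf (adjust the path for a system-wide config).
mkdir -p "$HOME/.config/containers"
cat >> "$HOME/.config/containers/containers.conf" <<'EOF'
[containers]
# 0 disables the limit (the CLI equivalent is --pids-limit=-1);
# a positive value raises the 2048 default instead.
pids_limit = 0
EOF
```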
What happened:
After deploying kind using podman provider, creating more than 100 pods on each kind worker node fails.
The following error messages are seen in the kubelet logs:
Jul 14 21:56:42 ovn-worker kubelet[44001]: E0714 21:56:42.556516 44001 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to "KillPodSandbox" for "03b80ada-b5e7-4cae-8ac7-f892bddcb5d0" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"70ad443d7b3e6217ebd3d5a93f1a8b8871dda20ffc3261afa83bc89e6db3ea79\": plugin type=\"ovn-k8s-cni-overlay\" name=\"ovn-kubernetes\" failed (delete): netplugin failed: \"runtime: failed to create new OS thread (have 9 already; errno=11)\\nruntime: may need to increase max user processes (ulimit -u)\\nfatal error: newosproc\\nruntime: failed to create new OS thread (have 10 already; errno=11)\\nruntime: may need to increase max user processes (ulimit -u)\\nfatal error: newosproc\\n\\nruntime stack:\\nruntime.throw({0x1867329?, 0xc0000a3e38?})\\n\\t/usr/local/go/src/runtime/panic.go:992
....
The issue is resolved if the "--pids-limit=-1" option is passed when starting the kind node container with podman, here: https://github.com/kubernetes-sigs/kind/blob/main/pkg/cluster/internal/providers/podman/provision.go#L192
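For illustration only, this is what the flag looks like on a plain podman invocation (the container name and image are placeholders; kind builds its own podman run command internally in provision.go):

```shell
# Sketch: --pids-limit=-1 lifts the per-container process limit,
# which otherwise defaults to 2048 processes.
podman run -d --pids-limit=-1 --name node-demo busybox sleep 300
```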
I think it's good to address this limitation, as we use kind for scale testing with 250 pods per node.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
kind version: (use kind version):
Kubernetes version: (use kubectl version):
Docker version: (use docker info):
OS (e.g. from /etc/os-release):