uWSGI running in a Kind cluster on Fedora 33 uses over 8Gi of memory #2175
Comments
You probably have swap enabled, in which case kubelet would ordinarily refuse to run, but kind configures it to continue running. However, this comes at the cost of memory limits not working. We can't fix this in kind. There is, however, a KEP upstream to allow swap to be enabled.
See also #1963 for a semi-related issue that is non-trivial to resolve. We don't have the cooperation of the necessary downstream components, so nodes cannot be properly restricted at the node level either (versus e.g. using a VM-based solution).
See kubernetes/kubernetes#53533 for the upstream issue regarding swap. IIRC, if you disable swap, memory limits will work. We don't take this approach automatically, given it's a global system option with tradeoffs.
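A quick way to test that theory on the host, using standard Linux commands (disabling swap is a machine-global change, so treat this as a temporary experiment):

```sh
# Check whether swap is currently active
swapon --show

# Temporarily disable all swap; memory limits should then be enforced.
# Re-enable afterwards with: sudo swapon -a
sudo swapoff -a
```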
I just tried re-running the test case on a fresh kind cluster after disabling swap. The zombie uwsgi process tried to overallocate and it got killed. Note that I later tried running this exact image on a k3s-based single-node cluster on the same machine, and the uwsgi process did not allocate 8Gi of RAM either (see my attached repo).
k3s also brings in a different containerd and Kubernetes version (and a forked one at that...), but that's good to know. I appreciate the detailed repro repo, thanks! I don't have quick access to a Fedora machine at the moment to actually repro on my end, though, and I'm going to be out for the next week with pretty limited time before then, unfortunately. It's very curious that it works on Ubuntu 20.10 but not Fedora. cc @aojea
This sounds like #760.
Indeed, setting it to a lower value fixes it. For now I'll set it to that on my system. From what I understand, the […]
Short history: most (I would say all) of these issues are application bugs that do allocation based on the number of file descriptors, see #760 (comment). On the other hand, if we hardcode a kernel value we don't know if other apps will break; maybe someone will need a high number of file descriptors and we'd be capping them... It will be interesting to know why Fedora goes with this high number too...
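For anyone comparing systems, the values in question can be inspected with standard tools (the Fedora ceiling quoted later in this thread was `fs.nr_open = 1073741816`):

```sh
# Kernel-wide ceiling on open file descriptors (very high on Fedora)
sysctl fs.nr_open

# Soft limit for the current shell
ulimit -n

# Limits of a running process, e.g. the uWSGI http router
# (<pid> is a placeholder)
cat /proc/<pid>/limits
```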
I did a bit more digging and found something, though I don't know if it's related. On both of my Fedora and Ubuntu systems, there's a […]. I've also found that the […]
[…] I have filed an issue on the uWSGI repository: unbit/uwsgi#2299.
Should we close it then, @RedRoserade?
Yes, I think this can be closed. Thank you for helping me debug this!
You're welcome.
Fixes memory consumption of the "uWSGI http 1" process that was rising above 8 GiB on systems like Fedora 37 (locally) and Fedora CoreOS 36 (in the cloud) due to very high file descriptor limits (`fs.nr_open = 1073741816`). See <kubernetes-sigs/kind#2175> and <unbit/uwsgi#2299>. Sets the uWSGI `max-fd` value to 1048576 as per <https://github.com/kubernetes-sigs/kind/pull/1799/files>. If need be, we can make it configurable via Helm chart values later.
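For reference, the same cap can be applied directly on the uWSGI command line; `--max-fd` is a standard uWSGI option, while the app file and callable names here are placeholders:

```sh
# Cap uWSGI's file-descriptor limit (1048576 = 1024 * 1024) so that
# allocations sized from the fd count stay reasonable
uwsgi --http :8080 --max-fd 1048576 --callable app --wsgi-file app.py
```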
What happened:

I have a Docker image for a Python web app that runs with uWSGI. If I run it through `docker run`, everything works fine. However, running the same Docker image on a Kind cluster results in the pod, more specifically uWSGI, consuming >8Gi of memory on boot, even with a minimal example. The same image can run through `docker run` with `--memory` set to under `512M` without issues.

This seems to affect only Fedora 33; the same image running on a Kind cluster on an Ubuntu 20.10 machine, with the same Docker version (20.10.5 community), runs as expected.

After some debugging, it seems to affect only uWSGI running with `--http`, where the extra process for the HTTP server is what consumes the absurd amounts of memory. If I run it instead with `--http-socket`, it runs fine, as it doesn't launch a dedicated HTTP server; but this is not equivalent, and is at most a workaround.

When the pod is run with a memory limit set (512Mi), looking at `dmesg -T` shows the OOM killer being triggered on the HTTP server process (i.e., when running `uwsgi --http`).

I also tried running this on an OpenShift cluster, with no issues. I also tried destroying and recreating the Kind cluster.
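To make the distinction concrete, these are the two invocations being compared (both flags are standard uWSGI options; the callable and file names are placeholders):

```sh
# Spawns a separate "uWSGI http 1" router process -- on the affected
# systems this is the process that balloons past 8Gi
uwsgi --http :8080 --callable app --wsgi-file app.py

# Workers speak HTTP directly, no router process -- runs fine,
# but is not an equivalent setup
uwsgi --http-socket :8080 --callable app --wsgi-file app.py
```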
What you expected to happen:
The pod should boot and consume a reasonable amount of memory regardless of operating system.
How to reproduce it (as minimally and precisely as possible):
I created a repository with instructions and dmesg logs, here: https://github.com/RedRoserade/kind-uwsgi-error-example
But here are some basic instructions (for a manual test); a one-shot version of these steps is sketched below:

1. Run a pod with the `python:3.8-buster` image, and `kubectl exec -it <pod> -- bash` into it.
2. Using `pip`, install `uwsgi` and `flask`.
3. Run `uwsgi --http :8080 --callable <app-variable> --wsgi-file <your-app-file.py>`.
4. Run `curl http://localhost:8080`. On Fedora 33, `curl` never succeeds, and `dmesg -T` shows OOM logs.
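As a sketch, the same steps condensed into one script to run inside the pod (the Flask app contents are an assumption for illustration; the linked repo has the actual files):

```sh
# Inside the pod: kubectl exec -it <pod> -- bash
pip install uwsgi flask

# Minimal WSGI app; the "app" variable matches --callable below
cat > app.py <<'EOF'
from flask import Flask
app = Flask(__name__)

@app.route("/")
def index():
    return "ok"
EOF

# On affected systems, the spawned "uWSGI http 1" router process
# balloons in memory here and gets OOM-killed if a limit is set
uwsgi --http :8080 --callable app --wsgi-file app.py
```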
Anything else we need to know?:
Not that I'm aware of.
Environment:
- kind version (use `kind version`): 0.10.0
- Kubernetes version (use `kubectl version`): 1.20.5
- Docker version (use `docker info`): 20.10.5
- OS (e.g. from `/etc/os-release`): Fedora 33, Kernel 5.11.10-200.fc33.x86_64