Faster Python, beyond semantic interposition #575
I ran some more benchmarks; same basic results though, this image is the slowest: https://pythonspeed.com/articles/faster-python/
Hi, I found your page very useful. I'm actually a Node.js dev, but I'm currently optimizing our Python Docker images. We use Python 3.7. Should we create a custom Ubuntu + Python 3.7 build with semantic interposition disabled and LTO enabled for maximum performance? Our Python services are huge anyway (3-5 GB, don't ask ;)) and are computationally heavy, so every percent of extra performance is noticeable.
The performance hit in Docker comes, IMHO, from seccomp: https://stackoverflow.com/questions/60840320/docker-50-performance-hit-on-cpu-intensive-code
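One way to make a seccomp-related slowdown visible is a syscall-heavy microbenchmark, since seccomp filtering adds a small cost to every syscall. A minimal sketch (not from the thread; `syscall_benchmark` is a name I'm introducing for illustration) that you could run on the host and then inside a container to compare:

```python
import os
import time


def syscall_benchmark(iterations=200_000):
    """Time a loop that makes one cheap syscall per iteration.

    Seccomp adds a per-syscall filtering cost, so comparing this
    timing on the host vs. inside a Docker container makes any
    overhead visible. (On some libc versions getpid() may be cached;
    modern glibc issues a real syscall.)
    """
    start = time.perf_counter()
    for _ in range(iterations):
        os.getpid()
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"{syscall_benchmark():.3f}s for 200k getpid() calls")
```

Run it once normally and once with `docker run --security-opt seccomp=unconfined …` to isolate the seccomp contribution.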
Yeah, deactivating seccomp results in a massive speed boost. But I guess deactivating seccomp isn't really the idea ;) I read an article saying that seccomp was optimized in Linux 5.11, reducing some lookup overhead. So which system you run your tests on is also relevant to the performance results.
I specifically upgraded my machine to Linux 5.11. The seccomp performance hit does not change.
Further research makes me believe that Docker has a general seccomp performance hit. I modified the default seccomp profile to SCMP_ACT_KILL and it did not kill the service, so I assume your benchmark never hits a seccomp restriction (moby/moby#41389). Even an allow-all seccomp profile results in a performance hit, so merely enabling seccomp causes the performance issue. This is plain overhead, either in the Linux kernel or in Docker.
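An "allow everything" profile like the one described above can be as small as the following config fragment (field names follow Docker's seccomp profile format; the point is that even this profile still enables seccomp filtering, which is enough to trigger the overhead):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW"
}
```

Pass it with `docker run --security-opt seccomp=allow-all.json …` and compare against `--security-opt seccomp=unconfined`, which disables filtering entirely.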
I ran into a similar issue with seccomp and Docker before, and in my case the answer turned out to be that starting an application with seccomp activates not only seccomp but also a certain Meltdown mitigation which was deactivated by default in my kernel. See https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown/MitigationControls and look for
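To check which mitigations your kernel is applying, Linux exposes per-vulnerability status files under `/sys/devices/system/cpu/vulnerabilities`. A small sketch (the helper name `mitigation_status` is mine, not from the thread) that reads them:

```python
from pathlib import Path


def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return the kernel's reported mitigation status per CPU
    vulnerability as {name: status}. Linux-only; returns {} if the
    sysfs directory is absent (e.g. on macOS or older kernels)."""
    root = Path(base)
    if not root.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(root.iterdir())}


if __name__ == "__main__":
    for name, status in mitigation_status().items():
        print(f"{name}: {status}")
```

Comparing this output before and after starting a seccomp-confined process (or across kernel boot parameters) helps confirm whether a mitigation, rather than seccomp itself, explains a slowdown.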
See https://bugs.python.org/issue38980 for --enable-shared performance. |
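The `--enable-shared` slowdown tracked in that issue is what `-fno-semantic-interposition` addresses. A hedged sketch of a CPython build combining the relevant configure flags (assuming a standard CPython source tree; exact gains vary by workload and compiler):

```shell
# Sketch: build a shared-library CPython with PGO, LTO, and semantic
# interposition disabled. All flags exist in CPython's configure script.
./configure --enable-shared --enable-optimizations --with-lto \
    CFLAGS="-fno-semantic-interposition" \
    LDFLAGS="-fno-semantic-interposition"
make -j"$(nproc)"
```

`-fno-semantic-interposition` lets the compiler inline calls within libpython instead of routing every call through the PLT, which is where the shared-build penalty comes from.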
#501 has a useful suggestion for speeding up Python by ~20%. After that's done, it's actually possible to do better.
Host is Fedora 33. All tests were run with Python 3.9.
On host:
Running inside Docker 20.04 (cgroups v2 enabled):
I am mystified why things are so much slower inside Docker. Some of this is clearly not because of the image, but the runtime. But notice the Ubuntu image is definitely faster.
With podman:
Note that the Anaconda (default Conda) Python 3.9 does not appear faster; it's specifically whatever Conda-Forge does. I am trying to figure that out.