too many mount/umount syscalls #2532
Comments
Related issues: […]
Yes, if you run […]
We could mitigate the situation for now by removing the exec probe. Is there any improvement that could be done on the runc side? On the systemd side, they are trying to add rate limiting for sd_event.
Hello, I also see high CPU usage from the /lib/systemd/systemd --user and /sbin/init processes. I tried various perf commands but never got such a clear view of CPU usage. Can you tell me how you recorded and viewed that CPU usage breakdown? Thank you.
Hello @cloud-66, I think you need debug symbols for this; please follow the instructions at https://wiki.ubuntu.com/Debug%20Symbol%20Packages. Once that is done, run these commands: […]
Here's my finding, repasted from kubernetes/kubernetes#82440:
It's reasonable to expect some syscall activity every exec probe period. It's also reasonable to expect a reduction in the number of syscalls per minute when the exec probe period is longer (e.g. 30 seconds instead of 5 seconds). However, this is not the case. The mere presence of an exec probe causes a lot of syscalls and an overall increased CPU load, regardless of the exec probe period. I suggest reading through kubernetes/kubernetes#82440, where I provided a lot of data on the problem. I don't know if it's a Kubernetes or a runc bug. EDIT: Thanks to @cyphar for explaining below that it's not a runc issue! ❤️
(Copying this comment from the Kubernetes issue.) Taking a quick look, kubernetes/kubernetes#82440 (comment) is describing Kubernetes effectively spamming […]. On the […]
Not really. The mount setup was actually added to avoid other issues (we used to copy the binary rather than ro-mount it, which caused container and system memory usage to increase by 10MB each time you attached to or configured a container). I am working on some kernel patches to remove the need for these protections (by blocking chained opens through a read-only fd), but they'd need to be merged first, and then you'd need to upgrade as well.
@cyphar Good to hear that!
I was about to post a program to create a bind-mount of a […]
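For anyone following along, here is a minimal sketch of that kind of read-only bind-mount program, loosely modelled on the approach in libcontainer/nsenter/cloned_binary.c linked from the issue description. It is an illustration only, not runc's actual code; the temporary path is made up, and it needs enough privilege to call mount(2):

```c
/*
 * Illustration only -- not runc's code. Creates a read-only bind mount
 * of the current binary over a temporary file, takes an O_PATH handle
 * through it, then detaches the mount again.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	int fd = -1;
	char path[] = "/tmp/ro-exe.XXXXXX";   /* hypothetical temp location */

	int tmpfd = mkstemp(path);             /* a plain file to mount over */
	if (tmpfd < 0) {
		perror("mkstemp");
		return 1;
	}
	close(tmpfd);

	/* Bind-mount the running binary over the temporary file... */
	if (mount("/proc/self/exe", path, "", MS_BIND, "")) {
		perror("mount(MS_BIND)");
		goto out;
	}
	/* ...then remount that bind mount read-only. */
	if (mount("", path, "", MS_REMOUNT | MS_BIND | MS_RDONLY, "")) {
		perror("mount(MS_REMOUNT|MS_RDONLY)");
		goto out_umount;
	}

	/* A handle taken through the read-only mount cannot be made writable. */
	fd = open(path, O_PATH | O_CLOEXEC);
	if (fd < 0)
		perror("open(O_PATH)");
	else
		printf("got read-only handle: fd %d\n", fd);

out_umount:
	/* Detach the mount again; the fd keeps the binary reachable. */
	umount2(path, MNT_DETACH);
out:
	unlink(path);
	return fd >= 0 ? 0 : 1;
}
```

Each time this dance runs it produces one mount event and one umount event, which is exactly the activity systemd reacts to in the next comment.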
So, this is mostly not a problem in runc per se, but a problem in systemd, which re-reads the proverbial mountinfo on every mount/umount. The ultimate fix for that belongs in the kernel, and some work is being done in that direction (see https://lkml.org/lkml/2020/3/18/569), but we're not quite there yet. In the meantime, systemd workarounds are being worked on; if someone is interested in looking into that, start from here: systemd/systemd#15464
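To make the cost concrete: the kernel offers no incremental notification of mount-table changes; it only raises POLLERR/POLLPRI on an open /proc/self/mountinfo, so a listener has to re-read and re-parse the entire file on every event. A rough sketch of that mechanism (an illustration of the idea, not systemd's actual code):

```c
/*
 * Illustration only -- not systemd's code. Watches /proc/self/mountinfo
 * and re-reads the whole table whenever the kernel signals a change.
 * Every mount(2)/umount(2) on the system wakes this loop up.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/self/mountinfo", O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct pollfd pfd = { .fd = fd, .events = POLLPRI };
	static char buf[1 << 20];   /* big enough for most mount tables */

	for (;;) {
		/* The kernel raises POLLERR|POLLPRI when the mount table changes. */
		if (poll(&pfd, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		/* There is no diff interface: seek back and re-read everything. */
		if (lseek(fd, 0, SEEK_SET) < 0) {
			perror("lseek");
			return 1;
		}
		ssize_t n = read(fd, buf, sizeof(buf) - 1);
		if (n < 0) {
			perror("read");
			return 1;
		}
		buf[n] = '\0';
		printf("mount table changed, re-parsed %zd bytes\n", n);
	}
}
```

With runc creating and tearing down a mount on every exec probe, a loop like this gets woken up and re-parses the mount table constantly, which is why the interim fixes are rate limiting on the systemd side and a better notification interface on the kernel side.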
Sorry, I hadn't realized the systemd issue was already mentioned here (right in the issue description). Still, reducing runc's overhead would be a good thing to do.
Big thanks to all of you for the hard work you've put into this. |
I'm not sure this is the right place to discuss this. We are observing high CPU usage by the systemd init process [1]. After some digging, it appears to be caused by many mount/umount syscalls. Is this because runc is being executed? [3] At the same time, no new pods are being scheduled to this instance or deleted from it. What is the purpose of runc being executed in this case?
Thanks!
[1] High CPU usage by the init process.
[2]
[3]
nsenter is imported in https://github.com/opencontainers/runc/blob/master/init.go#L10
The mount syscall is executed from https://github.com/opencontainers/runc/blob/master/libcontainer/nsenter/cloned_binary.c#L402
Additional information:
OS: Ubuntu 18.04
GKE 1.14