Review / revisit systemd unit files #73
Comments
ping @andrewhsu @seemethere PTAL
Looks like there's no common flag to set these all to "unlimited".
@thaJeztah is this still an issue?
Actually, yes, this is still an issue: I forgot I opened this one, but this is what @kolyshkin ran into while working on these for Docker EE. We should also review the other options, and see if we should use those.
This might not be the right place to ask, but I'm currently facing some problems with my Node.js app in Docker. According to this article, the way to enable core dumps is by setting … Regarding Docker, is it safe to turn that on? I'm afraid of: …
Looks like you want to enable this for a specific container; in that case, use the `--ulimit` option on `docker run`.
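For illustration, a per-container override could look like this (a sketch only; `my-node-app` is a placeholder image name, and `core=-1` requests an unlimited core-dump size for just this one container rather than raising limits daemon-wide in the systemd unit):

```bash
# Sketch: allow unlimited core dumps for this container only.
docker run --rm --ulimit core=-1 my-node-app
```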
Sorry, I didn't mention it earlier; I'm running in swarm mode.
Nope, that option isn't available (yet?) for swarm mode. Is the problem not reproducible when starting a container manually (`docker run`)?
Yep, it is not reproducible with a single docker run, and yes, I can disable that feature for every container; however, I'm just afraid of the potential overhead: what are the performance problems (kernel overhead)?
The performance impact is that the kernel enables accounting when setting those options; I'm not sure if the overhead has been benchmarked, or how much it is.
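For anyone wanting to check their own setup, the effective values can be inspected with `systemctl show`; this sketch assumes the unit is named `docker.service`:

```bash
# Inspect the limits and accounting settings currently in effect for the unit.
systemctl show docker.service \
  -p LimitNOFILE -p LimitNPROC -p LimitCORE \
  -p TasksMax -p CPUAccounting -p MemoryAccounting -p TasksAccounting
```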
Wondering though; we currently set these options on `docker.service`. With 18.09, `containerd` is shipped as a separate systemd service, so we may need to consider the same options for that unit.
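If the same limits turn out to be needed on a separately packaged unit, a drop-in override is probably the least invasive way to experiment. This is a sketch only; the unit name, path, and values are assumptions rather than a recommendation:

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/containerd.service.d/limits.conf
# (run `systemctl daemon-reload` and restart the service afterwards)
[Service]
LimitNOFILE=infinity
LimitNPROC=infinity
```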
Also, traefik/traefik#4302 mentions some options that I didn't know about, but they may not be applicable for our daemons.
UPDATE: With the exception of some talk below about accounting, the bulk of this has been revised into a much lengthier document: containerd/containerd#7566 (comment)

**Personal experience**

FWIW, this was the cause of some difficult-to-debug behaviour with a container that ran a daemonized process which initialized by iterating through the range of inherited FDs and closing them. A fairly common practice, although we now have more modern syscalls to do that efficiently (which the program has since fixed upstream). For the maintainers of a Docker image that used that program, this delayed the program start-up by 8 minutes (over a billion FDs to loop through), but due to the different distros used by maintainers, it originally was not clear how to reproduce until this issue was identified.

We've been using … I recently saw the comment in the … I'm not sure what the actual perf impact is otherwise that the config tries to warn against. But I do know that these … When it's not practical to adjust …

**Impact of cited kernel accounting overhead is ambiguous**

When looking into CPUAccounting, I am wondering if things have changed since this issue was opened in 2017?
**History of changes with limits to …**
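Tying the personal-experience note above to something concrete: the limit a container actually inherits can be checked from inside it, and that number is what a "close every inherited FD" loop has to walk (a sketch; `alpine` is just a convenient image):

```bash
# Print the open-files soft limit as seen from inside a container; with the
# daemon's limits set to infinity this can be a very large number, which is
# what made the FD-closing loop described above so slow.
docker run --rm alpine sh -c 'ulimit -n'
```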
**Memory accounting overhead resource**

I have a WIP document from earlier in the year when I was active here that was documenting references regarding accounting overhead, IIRC. I may not get around to resuming work on that to publish it here, but I recently saw this article for the upcoming 6.7 kernel that might be a helpful reference. In particular, it demonstrates that memory accounting overhead is now minimized for root cgroups, while user cgroups still aren't quite there, but there is a 30% reduction for both.

That is from a micro-benchmark designed to stress such overhead, so in real-world usage the overhead is probably significantly less still? If you are interested in the draft resource I had been working on previously, I could dig it up and provide it as-is. Probably not necessary, as it's better to have known problems that workarounds can reference, which IIRC wasn't the case for the config concerns here? (Status in 2019 was unsure if the config overhead disclaimer remained accurate.)

**Overhead disclaimer**
FWIW, for anybody landing here and reading the earlier discussion above on …
This observation of mine may only have been due to … The now reasonable …
Additionally, with the earlier 2019 comment in this discussion regarding …
We currently have two different unit files; one for `.deb` based packages, and one for `.rpm`. The `.rpm` version currently assumes systemd 226 or older, which is correct for CentOS and RHEL (RHEL 7.4 uses `systemd-219-42.el7.x86_64`), but incorrect for (at least) Fedora.

Default install of Docker CE 17.07 on Fedora 26:
Version of systemd running:
Things to notice;

- we set `LimitNOFILE`, `LimitNPROC` and `LimitCORE` to `infinity` to prevent overhead due to accounting (sketched below)
- we don't set `TasksMax`, as it's not supported on older versions of systemd (and those versions are not affected by systemd setting a low value)
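As a point of reference, the first bullet corresponds to something like the following lines in the `[Service]` section of the unit file (a sketch of the relevant directives, not the complete unit we ship):

```ini
[Service]
# Set to infinity to prevent overhead due to accounting (see above).
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
```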
## Configuring TasksMax

On newer versions of `systemd` we should set `TasksMax`, because the default set by systemd is too low. All docker processes, including containers, are started as a child of `dockerd`, so 4915 processes can easily be reached on bigger servers (see moby/moby#23332). (Looks like the limit was raised since the original limit of 512; systemd/systemd#3211.)
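For illustration, setting it could look like the following drop-in (a sketch; the drop-in path is illustrative, and it assumes a systemd version that supports the `TasksMax=` directive):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/docker.service.d/tasksmax.conf
[Service]
TasksMax=infinity
```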
In our `.deb` packages we automatically set this option based on the systemd version; we should have a similar approach for our RPM packages.

From the systemd man-page:
## Disable accounting (if possible)

Reading this blog post, "Enable CPU and Memory accounting for docker (or any systemd service)", I found that systemd has options to disable accounting. We should consider using these options instead of setting the limits to `infinity` (which does have the same effect); a sketch follows the list below. I have not yet found which version of systemd introduced these options.

The following options are available (see systemd.resource-control):
- `MemoryAccounting=no`
- `TasksAccounting=no` (same result as our current `TasksMax=infinity`)
- `CPUAccounting`
- `IOAccounting=no` (replaces `BlockIOAccounting`)
- `BlockIOAccounting=no` (deprecated, see `IOAccounting`)
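A sketch of what that could look like in the `[Service]` section, assuming a systemd version in which these directives are available:

```ini
[Service]
# Explicitly disable per-unit accounting rather than relying on the
# Limit* options being set to infinity.
CPUAccounting=no
MemoryAccounting=no
TasksAccounting=no
IOAccounting=no
```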
The defaults on Fedora 26 look like this;
## Questions to answer

- should we use the `xxAccounting` options?