executor: stop joining executor to container cgroup #6839
Conversation
c1ff6ec to f794b49
Looks great.
I think the executor being in the cgroups was vestigial from before we used libcontainer, when the executor had to enter the cgroups before fork/exec'ing the task process. Since we're leaving that double-forking up to libcontainer, I think you've outlined compelling reasons to remove the executor from the cgroups.
Could you add a test that asserts the executor process is not cgrouped? The added test only appears to assert the behavior of the task's cgroups.
Otherwise LGTM.
I've toyed with this but ultimately didn't like any of the approaches, and we can follow up with a test after the PR is merged. The issue is that to test for the negative (the lack of change), we must start a long-running task, sleep for long enough, and then check the self cgroup processes; otherwise, we risk the test succeeding because of timing effects rather than because the cgroup didn't move. Also, for the test to fail, a developer would need to explicitly move the task cgroup by making a series of method calls, rather than doing so accidentally or implicitly. As such, I believe adding such a test would slow the test suite without helping us protect against future regressions. Open to suggestions? I suspect a comment or a general exec driver design doc would suffice.
Ah, that does sound tricky @notnoop. I think I've written a test before that has a task do
I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
Stop joining the libcontainer executor process into the newly created task container cgroup, to ensure that the cgroups are fully destroyed on shutdown, and to make it consistent with other plugin processes.

Previously, the executor process was added to the container cgroup so that the executor's resource usage was aggregated along with user processes in our metric aggregation. However, adding the executor process to the container cgroup adds some complications without much benefit:

First, it complicates cleanup. We must ensure that the executor is removed from the container cgroup on shutdown. Indeed, we had a bug where we missed removing it from the systemd cgroup, because the executor uses `containerState.CgroupPaths` on launch, which includes systemd, but `cgroups.GetAllSubsystems`, which doesn't.

Second, it may have adverse side effects. When a user process is CPU-bound or uses too much memory, the executor should remain functional without risk of being killed (by the OOM killer) or throttled.

Third, it is inconsistent with other drivers and plugins. The Logmon and DockerLogger processes aren't in the task cgroups, and neither are containerd processes, even though containerd is equivalent to the executor in responsibility.

Fourth, in my experience, when the executor process moves cgroups while it's running, the cgroup accounting is odd: the cgroup's `memory.usage_in_bytes` doesn't seem to capture the full memory usage of the executor process and becomes a red herring when investigating memory issues.

For all the reasons above, I opted to have the executor remain in the Nomad agent cgroup; we can revisit this when we have a better story for plugin process cgroup management.
Fixes #6823.

I've added a test to capture the problem above; it's failing in https://circleci.com/gh/hashicorp/nomad/25824