Podman info add support for status of standard available cgroup controllers #10387
Conversation
libpod/define/info.go
Outdated
@@ -27,6 +27,9 @@ type HostInfo struct {
	BuildahVersion string `json:"buildahVersion"`
	CgroupManager  string `json:"cgroupManager"`
	CGroupsVersion string `json:"cgroupVersion"`
	MemoryLimit    bool   `json:"memoryLimit"`
I think this would be better in a subgroup:
CGroupInfo {
    CPUShares ...
    MemoryLimit ...
    PidsLimit ...
}
And alphabetized.
@rhatdan Sure, that makes sense; it's much cleaner. I'll make the required changes.
@rhatdan This is resolved. But the format might change, as there is a discussion below about CgroupControllers []string.
@rhatdan Made some changes in the latest commit and added CgroupControllers []string instead. Could you please review it?
libpod/info.go
Outdated
info := define.HostInfo{
	Arch:           runtime.GOARCH,
	BuildahVersion: buildah.Version,
	CgroupManager:  r.config.Engine.CgroupManager,
	MemoryLimit:    availableControllers["memory"],
Wondering if it may make more sense to just return CgroupControllers []string here.
(The Docker-compatible REST API should emulate the Docker REST API, though.)
@AkihiroSuda docker info emits booleans for the separate controllers, so I tried to keep it the same. But I guess a common string slice listing all available controllers would be better. WDYT?
Yes, for the Podman CLI, returning the common string slice SGTM.
The Docker-compatible REST API should retain Docker-compatible booleans.
@AkihiroSuda I tried this, but dumping all the available controllers makes info too noisy; on my machine even the single-line format has too much content. I guess Docker only prints the standard controllers because the full list is too verbose, but I am fine with this.
On my machine info shows this:
cgroupControllers:
- blkio
- cpu
- cpu,cpuacct
- cpuacct
- cpuset
- devices
- freezer
- hugetlb
- memory
- net_cls
- net_cls,net_prio
- net_prio
- perf_event
- pids
- rdma
- systemd
- unified
For cgroup v1, the right way is to parse /proc/cgroups
@AkihiroSuda That would still give the same output, as I have all controllers enabled on v1:
$ cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 0 230 1
cpu 0 230 1
cpuacct 0 230 1
blkio 0 230 1
memory 0 230 1
devices 0 230 1
freezer 0 230 1
net_cls 0 230 1
perf_event 0 230 1
net_prio 0 230 1
hugetlb 0 230 1
pids 0 230 1
rdma 0 230 1
not "same" :)
@AkihiroSuda Ah, I get it. So I guess the logic of the existing getAvailableControllers (https://github.com/containers/podman/blob/master/pkg/cgroups/cgroups.go#L129) needs to be changed for cgroup v1, or I could add a new function, but I don't think redundant code for a similar task is a good idea. @rhatdan @giuseppe @AkihiroSuda If we can mutually agree on having CgroupControllers []string in info, then I think we can proceed with removing the old logic in master and using /proc/cgroups for cgroup v1.
cgroup v1 is not really supported for rootless, even if manually chowned, as the delegation is unsafe.
For root, we can parse /proc/cgroups or /proc/self/cgroup, but for rootless we can just force every controller to be disabled.
@giuseppe @AkihiroSuda Thanks a lot. I have added the changes; this is resolved in the latest commit.
Please update the man page with the new output.
@rhatdan Updated the man pages.
pkg/cgroups/cgroups.go
Outdated
// rootless cgroup v2: check available controllers for the current user; systemd or service scope will inherit
if rootless.IsRootless() {
	uid := rootless.GetRootlessUID()
	subtreeControl = fmt.Sprintf("%s/user.slice/user-%d.slice/cgroup.subtree_control", cgroupRoot, uid)
IIUC the controllers there are not always available for rootless users.
Reading the cgroup.subtree_control of the Podman process itself might be better (still not robust either, but at least better...).
We can't use a subtree created by the podman process, as it wouldn't exist if podman info is called before any container has been created.
Also, I think podman cleans up its entries as soon as the running container exits. Not sure.
Yes, the problem is that some controllers are not enabled for the rootless user (and we don't really know which controllers are enabled until we do the equivalent of systemd-run --scope --user with systemd). So I agree with @AkihiroSuda that reading the currently enabled controllers is a best-effort way of doing it, which should be fine in most cases.
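The best-effort approach discussed here amounts to locating the calling process's own cgroup via /proc/self/cgroup and then reading its cgroup.controllers (or cgroup.subtree_control) file under /sys/fs/cgroup. The helper below is a sketch of just the path derivation, not the actual Podman implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// currentControllersPath derives, from /proc/self/cgroup content, the path
// of the cgroup.controllers file for the process's own cgroup on a pure
// cgroup v2 host. On v2 the file has a single entry of the form
// "0::<path>", where <path> is relative to the cgroup mount point.
func currentControllersPath(procSelfCgroup string) string {
	for _, line := range strings.Split(procSelfCgroup, "\n") {
		// Hierarchy ID 0 with an empty controller list marks the v2 entry.
		if strings.HasPrefix(line, "0::") {
			rel := strings.TrimPrefix(line, "0::")
			return filepath.Join("/sys/fs/cgroup", rel, "cgroup.controllers")
		}
	}
	return "" // not on cgroup v2 (or unexpected format)
}

func main() {
	// Example line as it might appear for a rootless user session.
	fmt.Println(currentControllersPath("0::/user.slice/user-1000.slice/session-2.scope"))
}
```

Reading cgroup.controllers of the current cgroup reports what is enabled for this process right now, which is exactly the "current enabled controllers" best effort described above; it can still under- or over-report what a later systemd scope will actually delegate.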
@giuseppe @AkihiroSuda Thanks a lot. I have added the changes; this is resolved in the latest commit.
@giuseppe @AkihiroSuda @rhatdan I have addressed all the comments in the latest commit. Could you please take a look?
The missing test is a flake, I think.
It was a flake; a force push solved it. 😃
Yay! Thanks @AkihiroSuda. Waiting for approval from @giuseppe and @rhatdan.
Restarted a single flake.
@mheon Thanks 😄
@mheon How do we restart a single flake? There is still one check with a flake.
LGTM
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: AkihiroSuda, flouthoc, giuseppe, mheon.
/retest
/lgtm
@flouthoc I click the "Details" link on the test in question. The GitHub summary page it takes you to should have a "Retest" button if the test is red.
This PR adds support for reflecting the availability status of cgroup controllers in info. The patch attempts to resolve #10306.