
Podman info add support for status of standard available cgroup controllers #10387

Merged: 1 commit merged into containers:master on May 24, 2021

Conversation

@flouthoc (Collaborator) commented May 18, 2021:

This PR adds support for reflecting the availability status of cgroup controllers in podman info. The patch attempts to resolve #10306.

@@ -27,6 +27,9 @@ type HostInfo struct {
    BuildahVersion string `json:"buildahVersion"`
    CgroupManager  string `json:"cgroupManager"`
    CGroupsVersion string `json:"cgroupVersion"`
    MemoryLimit    bool   `json:"memoryLimit"`
@rhatdan (Member) commented May 18, 2021:

I think this would be better in a subgroup:

CGroupInfo {
    CPUShares ...
    MemoryLimit ...
    PidsLimit ...
}

And alphabetized.
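
As a rough reading of that suggestion in Go (a hypothetical sketch only; the field types and JSON tags are assumptions, and the discussion below ends up replacing the booleans with a string slice):

package infosketch

// CgroupInfo groups the per-controller availability flags, alphabetized
// as suggested above. Hypothetical sketch, not the code that was merged.
type CgroupInfo struct {
    CPUShares   bool `json:"cpuShares"`
    MemoryLimit bool `json:"memoryLimit"`
    PidsLimit   bool `json:"pidsLimit"`
}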

@flouthoc (Collaborator, Author):

@rhatdan Sure, makes sense, much cleaner. I'll make the required changes.

@flouthoc (Collaborator, Author):

@rhatdan This is resolved, but the format might change since there is a discussion below about CgroupControllers []string.

@flouthoc (Collaborator, Author):

@rhatdan I made some changes in the latest commit and added CgroupControllers []string instead. Could you please review it?

@flouthoc force-pushed the cgroupv1-v2-info branch 3 times, most recently from 0bbbf15 to 2bbdfcc on May 18, 2021 20:27
pkg/cgroups/cgroups.go (outdated review thread, resolved)
libpod/info.go (outdated)
info := define.HostInfo{
    Arch:           runtime.GOARCH,
    BuildahVersion: buildah.Version,
    CgroupManager:  r.config.Engine.CgroupManager,
    MemoryLimit:    availableControllers["memory"],
@AkihiroSuda (Collaborator):

Wondering whether it may make more sense to just return CgroupControllers []string here.

(Docker-compatible REST API should emulate Docker REST API, though)

@flouthoc (Collaborator, Author):

@AkihiroSuda docker info emits a boolean for each controller, so I tried to keep it the same. But I guess a common string slice listing all available controllers would be better. WDYT?

@AkihiroSuda (Collaborator):

Yes, for the Podman CLI, returning the common string slice SGTM.
The Docker-compatible REST API should retain Docker-compatible booleans.
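
As a rough illustration of the direction agreed on here (a sketch under assumptions: the field names, JSON tags, and helper below are illustrative, not necessarily what the PR merged), the Podman-side host info can carry the controller names as a slice, while the Docker-compatible endpoint derives its booleans from that slice:

package infosketch

// HostInfoSketch is a hypothetical, trimmed-down version of the host
// info, carrying the available controller names as a plain slice.
type HostInfoSketch struct {
    CgroupManager     string   `json:"cgroupManager"`
    CgroupControllers []string `json:"cgroupControllers"`
}

// dockerCompatFlags derives the booleans a Docker-compatible REST API
// would expect (e.g. MemoryLimit, PidsLimit) from the controller slice.
func dockerCompatFlags(controllers []string) (memoryLimit, pidsLimit bool) {
    for _, c := range controllers {
        switch c {
        case "memory":
            memoryLimit = true
        case "pids":
            pidsLimit = true
        }
    }
    return memoryLimit, pidsLimit
}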

@flouthoc (Collaborator, Author):

@AkihiroSuda I tried this, but dumping all the available controllers makes info quite noisy; on my machine even the single-line format has too much content. I guess Docker only prints the standard controllers because this is too verbose, but I am fine with it.
On my machine info shows this:

cgroupControllers:
  - blkio
  - cpu
  - cpu,cpuacct
  - cpuacct
  - cpuset
  - devices
  - freezer
  - hugetlb
  - memory
  - net_cls
  - net_cls,net_prio
  - net_prio
  - perf_event
  - pids
  - rdma
  - systemd
  - unified

@AkihiroSuda (Collaborator):

For cgroup v1, the right way is to parse /proc/cgroups.

@flouthoc (Collaborator, Author):

@AkihiroSuda That would still give the same output, since I have all controllers enabled on v1:

$ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	0	230	1
cpu	0	230	1
cpuacct	0	230	1
blkio	0	230	1
memory	0	230	1
devices	0	230	1
freezer	0	230	1
net_cls	0	230	1
perf_event	0	230	1
net_prio	0	230	1
hugetlb	0	230	1
pids	0	230	1
rdma	0	230	1

@AkihiroSuda (Collaborator):

not "same" :)

@flouthoc (Collaborator, Author):

@AkihiroSuda Ah, I get it. So I guess the logic of the existing getAvailableControllers (https://github.com/containers/podman/blob/master/pkg/cgroups/cgroups.go#L129) for cgroup v1 needs to change, or I could add a new function, but I don't think redundant code for a similar task is a good idea. @rhatdan @giuseppe @AkihiroSuda If we can mutually agree on having CgroupControllers []string in info, then I think we can proceed with removing the old logic in master and using /proc/cgroups for cgroup v1.

@giuseppe (Member) commented May 21, 2021:

cgroup v1 is not really supported for rootless, even if manually chowned, as the delegation is unsafe.

For root, we can parse /proc/cgroups or /proc/self/cgroup, but for rootless we can just force every controller to be disabled.
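
A minimal sketch of that cgroup v1 direction (an illustration under assumptions, not the code this PR merged; the helper name is made up): parse /proc/cgroups for the enabled controllers when running as root, and report none for rootless.

package cgroupsketch

import (
    "bufio"
    "os"
    "strings"
)

// cgroupV1Controllers is a hypothetical helper: it returns the names of
// the enabled cgroup v1 controllers from /proc/cgroups, or nothing for
// rootless users, where v1 delegation is not supported.
func cgroupV1Controllers(rootless bool) ([]string, error) {
    if rootless {
        return nil, nil
    }
    f, err := os.Open("/proc/cgroups")
    if err != nil {
        return nil, err
    }
    defer f.Close()

    var controllers []string
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := scanner.Text()
        // Skip the "#subsys_name hierarchy num_cgroups enabled" header.
        if strings.HasPrefix(line, "#") {
            continue
        }
        fields := strings.Fields(line)
        // Columns: subsys_name, hierarchy, num_cgroups, enabled.
        if len(fields) == 4 && fields[3] == "1" {
            controllers = append(controllers, fields[0])
        }
    }
    return controllers, scanner.Err()
}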

@flouthoc (Collaborator, Author) commented May 22, 2021:

@giuseppe @AkihiroSuda Thanks a lot. I have added the changes; this is resolved in the latest commit.

@flouthoc force-pushed the cgroupv1-v2-info branch 2 times, most recently from 192a979 to 4dbfed7 on May 19, 2021 14:31
@rhatdan (Member) commented May 19, 2021:

Please update the man page with the new output.

@flouthoc force-pushed the cgroupv1-v2-info branch 2 times, most recently from 3cb56a7 to 440c098 on May 20, 2021 04:05
@flouthoc (Collaborator, Author):

@rhatdan Updated the man pages.

// rootless cgroup v2: check the available controllers for the current user; systemd or the service scope will inherit them
if rootless.IsRootless() {
    uid := rootless.GetRootlessUID()
    subtreeControl = fmt.Sprintf("%s/user.slice/user-%d.slice/cgroup.subtree_control", cgroupRoot, uid)
@AkihiroSuda (Collaborator):

IIUC the controllers there are not always available to rootless users.
Reading the cgroup.subtree_control of the Podman process itself might be better (it is still not robust either, but at least better...).


@flouthoc (Collaborator, Author):

We can't use a subtree created by the podman process, as it wouldn't exist if podman info is called before creating any container.

@flouthoc (Collaborator, Author) commented May 20, 2021:

Also, I think podman cleans up its entries as soon as the running container exits. Not sure.

@giuseppe (Member):

Yes, the problem is that some controllers are not enabled for the rootless user (and we don't really know which controllers are enabled until we do the equivalent of systemd-run --scope --user with systemd). So I agree with @AkihiroSuda that reading the currently enabled controllers is a best-effort way of doing it, which should be fine in most cases.
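
A best-effort sketch of the cgroup v2 side being discussed (again only an illustration; the helper name is made up, and the path construction mirrors the diff snippet above): read the enabled controllers from cgroup.subtree_control, using the user's slice when rootless.

package cgroupsketch

import (
    "fmt"
    "os"
    "strings"
)

// cgroupV2Controllers is a hypothetical helper: it reads the enabled
// controllers from cgroup.subtree_control, looking under the user's
// systemd slice for rootless users and at the cgroup root otherwise.
// This is best-effort: the controllers actually delegated to a scope
// may differ until systemd creates it.
func cgroupV2Controllers(cgroupRoot string, rootless bool, uid int) ([]string, error) {
    path := cgroupRoot + "/cgroup.subtree_control"
    if rootless {
        path = fmt.Sprintf("%s/user.slice/user-%d.slice/cgroup.subtree_control", cgroupRoot, uid)
    }
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    // The file is a single space-separated line, e.g. "cpu io memory pids".
    return strings.Fields(string(data)), nil
}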

@flouthoc (Collaborator, Author) commented May 22, 2021:

@giuseppe @AkihiroSuda Thanks a lot. I have added the changes; this is resolved in the latest commit.

@flouthoc force-pushed the cgroupv1-v2-info branch 2 times, most recently from ebaaefa to d44d049 on May 22, 2021 07:24
@flouthoc (Collaborator, Author):

@giuseppe @AkihiroSuda @rhatdan I have addressed all the comments in the latest commit. Could you please take a look?

@flouthoc (Collaborator, Author):

The missing test is a flake, I think.

@flouthoc requested a review from AkihiroSuda on May 22, 2021 09:26
@flouthoc force-pushed the cgroupv1-v2-info branch from d44d049 to 0bf92a0 on May 22, 2021 09:47
@flouthoc (Collaborator, Author):

It was a flake; the force push solved it. 😃

@flouthoc (Collaborator, Author):

Yay!!! Thanks @AkihiroSuda, waiting for @giuseppe's and @rhatdan's approval.

pkg/cgroups/cgroups.go (two outdated review threads, resolved)
@flouthoc force-pushed the cgroupv1-v2-info branch 2 times, most recently from 60e28ab to 7e70b33 on May 24, 2021 10:05
@flouthoc force-pushed the cgroupv1-v2-info branch from 7e70b33 to 2f5552c on May 24, 2021 11:25
@mheon (Member) commented May 24, 2021:

Restarted a single flake.
/approve
LGTM

The openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on May 24, 2021.
@flouthoc (Collaborator, Author):

@mheon Thanks 😄

@flouthoc (Collaborator, Author):

@mheon How do we restart a single flake? There is still one check that has a flake.

@giuseppe (Member) left a comment:

LGTM

The openshift-ci bot (Contributor) commented May 24, 2021:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: AkihiroSuda, flouthoc, giuseppe, mheon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@giuseppe
Copy link
Member

/retest

@mheon (Member) commented May 24, 2021:

/lgtm

@flouthoc I click the "Details" link on the test in question. The GitHub summary page it takes you to should have a "Retest" button if the test is red.

The openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on May 24, 2021.
@openshift-merge-robot merged commit 4d6b66a into containers:master on May 24, 2021.
The github-actions bot added the "locked - please file new issue/PR" label and locked the conversation as resolved, limiting it to collaborators, on Sep 23, 2023.
Labels: approved, lgtm, locked - please file new issue/PR
Projects: none yet
Development: successfully merging this pull request may close this issue: podman info - print available cgroup controllers
6 participants