fix pod cgroup lifecycle #19888
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: giuseppe. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 9cf763b to 48e6b6b.
LGTM assuming happy tests.
Force-pushed from 48e6b6b to 32c8ea7.
Two suggestions that might make it easier to diagnose test failures.
Speaking of test failures, the ones in this CI run look real but they're not in test code that I understand.
Changes LGTM
Code changes LGTM; however, the tests are very red.
Force-pushed from 1085f2a to de31714.
This regresses cockpit-podman: the pod does not show mounted volumes anymore. This is the first time these failed in podman, so we now need to learn how to work together to efficiently debug them. But let's first see what the other tests say (many are still running) -- if they fail as well, then these are probably easier to debug for you.
I can take a look right now. Is there a way to replicate the issue from the podman CLI?
I am trying locally with cockpit now. Do you have a screenshot with the expected output? Or how was the pod created so I can test with a working podman version?
A CLI reproducer does not automatically pop out, as c-podman uses the REST API. I (and the whole cockpit team) are travelling back from a sprint right now, and I don't currently have enough internet to investigate with the built COPR here, but I'll do that first thing on Monday. It shouldn't actually be that hard, though. High level, it creates a pod and queries the REST API:

systemctl --user start podman.socket
podman pod create -v /tmp:/hosttmp p1
curl --unix $XDG_RUNTIME_DIR/podman/podman.sock http://none/v1.12/libpod/pods/p1/json

From that, look at the

"mounts":[{"Type":"bind","Source":"/tmp","Destination":"/hosttmp","Driver":"","Mode":"","Options":["nosuid","nodev","rbind"],"RW":true,"Propagation":"rprivate"}]

That's not actually what cockpit-podman looks at for this test, though, but it's a good first spot-check. For the "real" thing, get the infra container name and query its /json endpoint the same way; replace 2892d5ae6ccf-infra with the actual name, of course. On current podman, this looks like

"Mounts":[{"Type":"bind","Source":"/tmp","Destination":"/hosttmp","Driver":"","Mode":"","Options":["nosuid","nodev","rbind"],"RW":true,"Propagation":"rprivate"}]

and it looks like with this PR it becomes empty. Are you able to quickly test that with your version? If that succeeds, I'll need to be at a workplace that's better than a moving train 😁 Thanks!
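For convenience, here is a small shell sketch that strings the spot-checks above together with jq. It is untested and only illustrative: the socket path, API prefix, and pod name p1 are taken from the commands above, while the --format template used to pull out the infra container name is an assumption, not something from this thread.

```sh
#!/bin/sh
# Hypothetical spot-check based on the commands above (rootless podman).
set -eu

systemctl --user start podman.socket
podman pod create -v /tmp:/hosttmp p1

sock="$XDG_RUNTIME_DIR/podman/podman.sock"

# Pod-level view: the bind mount should show up under "mounts".
curl -s --unix-socket "$sock" http://none/v1.12/libpod/pods/p1/json | jq '.mounts'

# Infra-container view: closer to what cockpit-podman actually reads ("Mounts").
# The --format template is an assumption about how to grab the infra name.
infra=$(podman pod inspect p1 --format '{{ (index .Containers 0).Name }}')
curl -s --unix-socket "$sock" "http://none/v1.12/libpod/containers/$infra/json" | jq '.Mounts'
```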
I tried the above in an F38 VM with the build here, running commands as root, i.e. system podman, not user (that's also what that test uses). Inspecting the pod gives:

{"Id":"af07236b92f6fcc9427fd39ac1420fbbfc9b8f5d0f7ae20d1a19a7717587c1c8","Name":"p1","Created":"2023-09-08T12:50:19.293219092Z","CreateCommand":["podman","pod","create","-v","/tmp:/hosttmp","p1"],"ExitPolicy":"continue","State":"Created","Hostname":"","CreateCgroup":true,"CgroupParent":"machine.slice","CgroupPath":"machine.slice/machine-libpod_pod_af07236b92f6fcc9427fd39ac1420fbbfc9b8f5d0f7ae20d1a19a7717587c1c8.slice","CreateInfra":true,"InfraContainerID":"9b365ca8c8ee0b7a0620c4d30f7095e2156b10925ba10625640cf3c420a0b4e1","InfraConfig":{"PortBindings":{},"HostNetwork":false,"StaticIP":"","StaticMAC":"","NoManageResolvConf":false,"DNSServer":null,"DNSSearch":null,"DNSOption":null,"NoManageHosts":false,"HostAdd":null,"Networks":["podman"],"NetworkOptions":null,"pid_ns":"private","userns":"host","uts_ns":"private"},"SharedNamespaces":["ipc","net","uts"],"NumContainers":1,"Containers":[{"Id":"9b365ca8c8ee0b7a0620c4d30f7095e2156b10925ba10625640cf3c420a0b4e1","Name":"af07236b92f6-infra","State":"created"}],"LockNumber":0}

And that has no mounts entry; introspecting the infra container the same way shows an empty Mounts list as well. So that's a nice CLI reproducer, and should also be not too hard for your own tests? Cheers! (I'm really happy that this works as intended!)
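If one wanted to turn that reproducer into an automated check, a minimal bats-style sketch could look like the following. This is hypothetical: it inspects via the podman CLI rather than the REST API, and it does not use any helpers from podman's actual test suite.

```sh
#!/usr/bin/env bats

@test "pod infra container keeps its volume mounts" {
  run podman pod create -v /tmp:/hosttmp mountpod
  [ "$status" -eq 0 ]

  # Grab the infra container name from the pod and count its mounts.
  infra=$(podman pod inspect mountpod --format '{{ (index .Containers 0).Name }}')
  run podman inspect "$infra" --format '{{ len .Mounts }}'
  [ "$status" -eq 0 ]
  [ "$output" -gt 0 ]   # the regression discussed above made this 0

  podman pod rm -f mountpod
}
```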
move the code to remove the pod cgroup to a separate function. It is a preparation for the next patch. Signed-off-by: Giuseppe Scrivano <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
When the pod is stopped, we need to destroy the pod cgroup, otherwise it is leaked. Signed-off-by: Giuseppe Scrivano <[email protected]>
accept only the resources to be used by the pod, so that the function can more easily be used by a successive patch. Signed-off-by: Giuseppe Scrivano <[email protected]>
do not create the pod cgroup if it already exists. Signed-off-by: Giuseppe Scrivano <[email protected]>
a pod can use cgroups without an infra container. Signed-off-by: Giuseppe Scrivano <[email protected]>
This allows using --share-parent with --infra=false, so that the containers in the pod can share the parent cgroup. Signed-off-by: Giuseppe Scrivano <[email protected]>
When the infra container is not created, we can still set limits on the pod cgroup. Signed-off-by: Giuseppe Scrivano <[email protected]>
When a container is created and it is part of a pod, we ensure the pod cgroup exists so limits can be applied on the pod cgroup. Closes: containers#19175 Signed-off-by: Giuseppe Scrivano <[email protected]>
This test checks that the pod cgroups are created and that the limits set for a pod cgroup are enforced also after a reboot. Signed-off-by: Giuseppe Scrivano <[email protected]>
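To make the behavior described in the commits above concrete, here is a rough, hypothetical sketch of checking that a limit set on an infra-less pod lands on the pod cgroup. It assumes cgroup v2 and rootful podman; the flags come from the commit messages above, everything else is illustration rather than the actual test added by this PR.

```sh
# Illustration only: a memory limit on a pod created without an infra container.
podman pod create --name limited --infra=false --share-parent --memory 256m
podman run -d --pod limited --name c1 alpine sleep 1h

# Pod inspect reports the pod cgroup path, e.g. machine.slice/machine-libpod_pod_<id>.slice.
cgpath=$(podman pod inspect limited --format '{{.CgroupPath}}')

# On cgroup v2 the limit should be visible on the pod cgroup itself.
cat "/sys/fs/cgroup/$cgpath/memory.max"   # expect 268435456 (256 MiB)
```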
Force-pushed from de31714 to 0c75eac.
I think the culprit is the following patch:
I'll drop it from the current series.
I've opened a different PR for that issue: #19902
cockpit is happy again. |
/lgtm |
This was very smooth, thanks @giuseppe! That's exactly how I wanted these to work 🎉
/hold |
/hold cancel |
While investigating #19175, I've found a bunch of issues with the way we handle the pod cgroup:
More details are in the individual commits. There is some room for improvement, e.g. not trying to guess the systemd path, but we would need to fix that in c/common first; we can do this incrementally later.
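As a rough illustration of the systemd-path point: with the systemd cgroup manager, the pod cgroup is a slice named after the pod ID under the configured cgroup parent, as can be seen in the CgroupParent/CgroupPath fields of the inspect output earlier in this thread. The commands below are hypothetical and assume rootful podman with the default machine.slice parent.

```sh
# The pod cgroup slice is derived from the pod ID (assumption: rootful podman,
# systemd cgroup manager, default machine.slice parent).
pod_id=$(podman pod inspect p1 --format '{{.ID}}')
systemctl status "machine-libpod_pod_${pod_id}.slice"
```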
Closes: #19175
Does this PR introduce a user-facing change?