Support cgroup v2 in runsc #3481
Hi, we are slowly starting to think about cgroups v2; it would be nice to know if this is on the roadmap.
This work is not staffed right now. We're planning to pick this up early next year.
@fvoznika, has there been any progress on this issue? I'm planning to spend some time working on this if possible (since we are planning to migrate to cgroupv2 very soon), so I wonder if we should wait or start a collaboration effort on this.
No progress yet. It would be great if you could get started on it.
Looking at this, I think my plan roughly is:
Thanks for spelling out your plan. We try to avoid adding dependencies as much as possible, to keep tight control over the code that is included in runsc. So instead of replacing the existing implementation, `Cgroup` could become an interface:

```go
type Cgroup interface {
	Install(res *specs.LinuxResources) error
	Uninstall() error
	Join() (func(), error)
	CPUQuota() (float64, error)
	NumCPU() (int, error)
	MemoryLimit() (uint64, error)
}
```

Re: testing, that's a good question. We have a cgroups integration test in root/cgroup_test.go. We can make sure that the images used to run this test have support for cgroups v2; otherwise, nested virtualization is also an option.
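To choose between a v1 and v2 implementation of such an interface at runtime, the usual approach is to check whether /sys/fs/cgroup is mounted as the unified (cgroup2) hierarchy. Below is a minimal sketch of that detection, using the `CGROUP2_SUPER_MAGIC` constant from linux/magic.h; the helper names are illustrative, not gVisor's actual API:

```go
package main

import (
	"fmt"
	"syscall"
)

// cgroup2SuperMagic is CGROUP2_SUPER_MAGIC from linux/magic.h.
const cgroup2SuperMagic = 0x63677270

// isCgroup2Magic reports whether a filesystem magic number identifies
// a cgroup2 (unified hierarchy) mount.
func isCgroup2Magic(magic int64) bool {
	return magic == cgroup2SuperMagic
}

// unifiedMode checks whether /sys/fs/cgroup is mounted as cgroup2.
// This mirrors the detection runc performs (Linux only).
func unifiedMode() (bool, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, err
	}
	return isCgroup2Magic(int64(st.Type)), nil
}

func main() {
	v2, err := unifiedMode()
	if err != nil {
		fmt.Println("detection failed:", err)
		return
	}
	if v2 {
		fmt.Println("host uses cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("host uses cgroup v1")
	}
}
```

A factory function could then return the v1 or v2 `Cgroup` implementation based on this check, keeping callers version-agnostic.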
@fvoznika some updates. First of all, it's working!
I abandoned the requirement for the cgroupv1 changes to reuse libcontainer's cgroup interface, since it's quite complicated to match the feature set 1:1 while still preserving backward compatibility. We use libcontainer's cgroup interface only for v2, and switch between the two based on v2 detection. The current interface is:
I'm passing name in to reconstruct the
Now we are at the stage of figuring out how to pass most integration tests. I don't think the images will need any additional support; the integration tests will just need to be adjusted, because not all v1 values map to v2. It looks like the CRI setup will need some changes too. I'm testing this inside a Vagrant VM, similar to how containerd/runc does it, so it can be mapped to a CI that supports nested virtualization.
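One concrete example of a v1 value that does not carry over directly is cpu.shares: on v2 it becomes cpu.weight on a different scale, and runc applies a linear conversion between the two ranges. A sketch of that mapping (the function name here is illustrative):

```go
package main

import "fmt"

// cpuSharesToWeight converts a cgroup v1 cpu.shares value (range [2, 262144])
// to a cgroup v2 cpu.weight value (range [1, 10000]) using the same linear
// mapping runc uses; 0 means "no value set" and is passed through.
func cpuSharesToWeight(shares uint64) uint64 {
	if shares == 0 {
		return 0
	}
	return 1 + ((shares-2)*9999)/262142
}

func main() {
	// The v1 default of 1024 shares maps to a v2 weight of 39, which is
	// why tests asserting exact v1 values need adjusting under v2.
	for _, s := range []uint64{2, 1024, 262144} {
		fmt.Printf("cpu.shares=%d -> cpu.weight=%d\n", s, cpuSharesToWeight(s))
	}
}
```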
I've created the feature branch https://github.com/google/gvisor/tree/feature/cgroupv2. Let's continue the cgroupv2 development there. Then, when it is ready, we will merge it into the master branch. TODO list:
This list is based on @fvoznika's comments on #5453 that have not been addressed.
Hi again! Sorry for the period of inactivity, I was busy with some other projects. @avagin https://github.com/google/gvisor/tree/feature/cgroupv2 is good; what's the development process here? I think we can split the patchset into two parts: one to bump dependencies and create a cgroup interface that both v1 and v2 can implement, and a second to add v2 support.
The PR uses Vagrant to set up a v2 environment. It would be great if someone with CI access could set that up, either with Vagrant or with a build agent that runs cgroupv2. I don't have CI access, so the feedback loop is terrible here.
I think ideally we want some shared libraries here that different cgroup consumers can use. Currently it uses runc's cgroupv2 implementation, but there is also a desire to unify the cgroup implementation with containerd/cgroups (see opencontainers/runc#3007). Is that acceptable?
I will need to take another look to see if we can still keep containerd 1.3 compatibility (maybe possible, but IIRC we would have to reimplement a bunch of things). The simplest option is of course to bump the required containerd version to 1.4; is there any plan to do that?
All new PRs for cgroupv2 should be opened against this branch.
It is up to you, but keep in mind that we want to avoid any new external dependencies without a real reason. We can consider copy-pasting some code from runc; I think the license allows us to do this.
I will help with this, but let's resolve the other TODOs first.
It depends on a few things. The main idea is that we want to be able to review all the code that we use. That means a new library should have a limited number of new external dependencies, and it has to be relatively small (doing little that we will not use).
I'm repackaging the patchset to make reviewing and testing simpler:
I will help with that. I am going to add cgroup2 workers in Buildkite.
runsc uses cgroups v1 to set pod limits. Kubernetes is switching over to cgroups v2: it's alpha in 1.19 and will possibly hit beta in 1.20.
Relevant links:
SIG-node cgroups KEP
containerd issue
runc issue