Large containerd Snapshot Directories and Cleanup Assistance #10195
Comments
@modigithub Do you have an answer now? I ran into the same problem and have the same doubts.
Hi @Vieufoux, No, I haven't found a solution yet. Moreover, I'm not even sure whether kubespray is the appropriate place for this issue, as it seems to me that containerd is actually the root of the problem. Perhaps someone will have the time to answer my questions. I would really appreciate any insights or assistance from the community on this issue. Kind regards,
I actually have the same problem and am looking for a solution. I found that you can change the relevant configuration. Apply changes: after saving the configuration, restart the kubelet service:
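The exact setting being changed is not shown above, so the following is only a minimal sketch of the restart step, assuming the kubelet runs as a systemd service (as it typically does on kubeadm- or kubespray-provisioned nodes):
# Reload unit files in case the kubelet unit or its drop-ins changed,
# then restart the kubelet so the new configuration takes effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet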
This isn't that large ^. |
@VannTen: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have been encountering an issue where the directories io.containerd.snapshotter.v1.overlayfs, io.containerd.snapshotter.v1.native and io.containerd.content.v1.content in /var/lib/containerd are occupying a large amount of space:
io.containerd.snapshotter.v1.overlayfs: 6.2GB
io.containerd.snapshotter.v1.native: 3.5GB
io.containerd.content.v1.content: 2.5GB
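For reference, these per-directory sizes can be reproduced on the node with a simple disk-usage check (assuming root access to /var/lib/containerd; the paths below match the directories listed above):
# Report the size of each containerd plugin directory
sudo du -sh /var/lib/containerd/io.containerd.*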
I have attempted to use crictl rmi --prune to clean up unneeded images, and I have tried creating a script to remove non-running containers, but the directories remain large. I have also reviewed the containerd and Kubernetes documentation, but I'm still unclear about why these directories are so large and how to properly clean them. I also created a shell script to clean up:
# Remove containers that are not currently running
containers=$(crictl ps -a -q)
for container in $containers
do
  # Extract the container state from the crictl inspect JSON output
  container_state=$(crictl inspect "$container" | grep '"state"' | awk -F'"' '{print $4}')
  # Delete the container unless it is still running
  [ "$container_state" != "CONTAINER_RUNNING" ] && crictl rm "$container"
done
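Beyond removing stopped containers, a couple of read-only commands can help show what is still holding space in those directories; this is only a diagnostic sketch (it assumes the CRI plugin's default k8s.io containerd namespace):
# List the snapshots and content blobs containerd still tracks in the CRI namespace
sudo ctr -n k8s.io snapshots ls
sudo ctr -n k8s.io content ls
# Images known to the CRI layer; crictl rmi --prune removes the ones no container uses
sudo crictl images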
Questions:
Is there any known issue in the specific version of containerd or Kubernetes that might be causing this?
Environment:
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:29:58Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Debian GNU/Linux 11 (bullseye)
Kernel: Linux Server 5.10.0-17-amd64 #1 SMP Debian 5.10.136-1 (2022-08-13) x86_64 GNU/Linux
Kubespray version (git rev-parse --short HEAD): 31d7e64