@ErickStaal yes, please — at least the output of kubectl logs and kubectl describe for the failed pods. It would also be great if you can find a simple repro, ideally one that doesn't need a Ceph cluster.
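For reference, the diagnostics requested above can be collected roughly as follows (the pod name is taken from the listing below as an example; substitute the actual failing pods):

```shell
# Logs from all containers of a failing pod, including the previous (crashed) instance.
kubectl -n rook-ceph logs rook-ceph-mds-k8sfs-a-65588bd59d-d9ccf --all-containers --previous

# Events, restart counts, and exit codes for the same pod.
kubectl -n rook-ceph describe pod rook-ceph-mds-k8sfs-a-65588bd59d-d9ccf
```

The `--previous` flag is important for CrashLoopBackOff pods, since the current container may not have produced logs yet.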
Description
Rook (Ceph) fails to start correctly after upgrading to runc v1.2.0. Rolling back to runc v1.1.15 fixes all errors.
Steps to reproduce the issue

Upgrade the nodes to runc v1.2.0. The Rook mds and mgr pods then go into CrashLoopBackOff (kubectl output):

```
rook-ceph rook-ceph-mds-k8sfs-a-65588bd59d-d9ccf 1/2 CrashLoopBackOff 215 (53s ago) 19h
rook-ceph rook-ceph-mds-k8sfs-b-686bdc8d8d-kk498 1/2 CrashLoopBackOff 67 (50s ago) 5h56m
rook-ceph rook-ceph-mgr-b-58f9d6576b-4df8v 2/3 CrashLoopBackOff 333 (51s ago) 19h
```
I checked the output of kubectl describe nodes. There was no memory or storage pressure on the nodes.
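The node-pressure check mentioned above can be narrowed to the relevant conditions like this (a minimal sketch; on a healthy node all three should report False):

```shell
# Show only the resource-pressure conditions from the node status.
kubectl describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure'
```

If all conditions are False, the kubelet is not evicting or throttling pods for resource reasons, which points away from node capacity as the cause of the crash loops.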
Describe the results you received and expected
Expected: Rook starts correctly, just as under runc v1.1.15. Received: the mds and mgr pods crash-loop under runc v1.2.0 (see the listing above).
What version of runc are you using?
v1.1.15 (I rolled back from v1.2.0 and everything works again).
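For anyone reproducing this, the version on each node can be checked directly, and a rollback looks roughly like the sketch below. This assumes runc was installed as a standalone binary; package-managed installs (apt/containerd.io) should instead pin or downgrade the package, and the install path may differ per distro:

```shell
# Check which runc version the node is actually running.
runc --version

# Rollback sketch: fetch the v1.1.15 release binary and replace the one in use.
curl -fsSLo /tmp/runc https://github.com/opencontainers/runc/releases/download/v1.1.15/runc.amd64
sudo install -m 755 /tmp/runc /usr/local/sbin/runc

# Restart the container runtime so it picks up the replaced binary.
sudo systemctl restart containerd   # or the CRI runtime in use
```

Existing containers keep running across a runc swap; only newly created containers use the replaced binary, so the crash-looping pods pick it up on their next restart.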
Host OS information
PRETTY_NAME="Ubuntu 24.04.1 LTS"
Host kernel information
Linux 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
(on all Kubernetes nodes).