
SIGSEGV: segmentation violation code=0x2 addr=0xc001302ce9 pc=0x2986024] on dind (grpc.(*ClientConn).resolveNow) #49285

Open
heyvito opened this issue Jan 16, 2025 · 8 comments
Labels
kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. status/more-info-needed status/0-triage version/27.5

Comments


heyvito commented Jan 16, 2025

Description

Upon starting dockerd, the process terminates with SIGSEGV: segmentation violation code=0x2 addr=0xc001302ce9 pc=0x2986024]

Full log
cat: can't open '/proc/net/arp_tables_names': No such file or directory
iptables v1.8.10 (nf_tables)
time="2025-01-16T10:25:23.359110652Z" level=info msg="Starting up"
time="2025-01-16T10:25:23.359609123Z" level=info msg="containerd not running, starting managed containerd"
time="2025-01-16T10:25:23.360487902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=161
unexpected fault address 0xc001302ce9
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x2 addr=0xc001302ce9 pc=0x2986024]
goroutine 57 gp=0xc0005a3500 m=11 mp=0xc0005d2808 [running]:
runtime.throw({0x9000fb?, 0x8000?})
	/usr/local/go/src/runtime/panic.go:1023 +0x5c fp=0xc000575dd8 sp=0xc000575da8 pc=0x213b5dc
runtime.sigpanic()
	/usr/local/go/src/runtime/signal_unix.go:895 +0x285 fp=0xc000575e38 sp=0xc000575dd8 pc=0x2154285
google.golang.org/grpc.(*ClientConn).resolveNow(0xc0005a7008, {})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1081 +0x44 fp=0xc000575e60 sp=0xc000575e38 pc=0x2986024
google.golang.org/grpc.(*addrConn).resetTransportAndUnlock(0xc000322c08)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1254 +0x258 fp=0xc000575f78 sp=0xc000575e60 pc=0x2986a38
google.golang.org/grpc.(*addrConn).connect(0xc000322c08)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:912 +0x145 fp=0xc000575fc8 sp=0xc000575f78 pc=0x2985005
google.golang.org/grpc.(*acBalancerWrapper).Connect.gowrap1()
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/balancer_wrapper.go:300 +0x25 fp=0xc000575fe0 sp=0xc000575fc8 pc=0x297f9a5
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000575fe8 sp=0xc000575fe0 pc=0x2176d41
created by google.golang.org/grpc.(*acBalancerWrapper).Connect in goroutine 56
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/balancer_wrapper.go:300 +0x56
goroutine 1 gp=0xc0000061c0 m=nil [select, locked to thread]:
runtime.gopark(0xc0006c02e8?, 0x2?, 0x90?, 0x1?, 0xc0006c0254?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0006c00f8 sp=0xc0006c00d8 pc=0x213e62e
runtime.selectgo(0xc0006c02e8, 0xc0006c0250, 0x1c0?, 0x0, 0xd27f10?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc0006c0218 sp=0xc0006c00f8 pc=0x21504a5
github.com/docker/docker/libcontainerd/supervisor.Start({0xd5bd58, 0xc000114fa0}, {0xc0005a05a0?, 0x22461b5?}, {0xc0005a05c0, 0x1a}, {0xc000175460, 0x3, 0x3fba2c0?})
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:104 +0x5a5 fp=0xc0006c03c0 sp=0xc0006c0218 pc=0x3d93685
main.(*DaemonCli).initContainerd(0xc0005e23c0, {0xd5bd58, 0xc000114fa0})
	/go/src/github.com/docker/docker/cmd/dockerd/daemon_unix.go:141 +0x24e fp=0xc0006c04a0 sp=0xc0006c03c0 pc=0x3da184e
main.(*DaemonCli).start(0xc0005e23c0, 0xc000324f00)
	/go/src/github.com/docker/docker/cmd/dockerd/daemon.go:185 +0x8c9 fp=0xc0006c1c40 sp=0xc0006c04a0 pc=0x3d99149
main.runDaemon(...)
	/go/src/github.com/docker/docker/cmd/dockerd/docker_unix.go:13
main.newDaemonCommand.func1(0xc0005ee200?, {0xc0005ea840?, 0x7?, 0x8fed1b?})
	/go/src/github.com/docker/docker/cmd/dockerd/docker.go:40 +0x94 fp=0xc0006c1c70 sp=0xc0006c1c40 pc=0x3da1d74
github.com/spf13/cobra.(*Command).execute(0xc000322308, {0xc000110050, 0x3, 0x3})
	/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:985 +0xaca fp=0xc0006c1df8 sp=0xc0006c1c70 pc=0x22f990a
github.com/spf13/cobra.(*Command).ExecuteC(0xc000322308)
	/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff fp=0xc0006c1ed0 sp=0xc0006c1df8 pc=0x22fa1df
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:1041
main.main()
	/go/src/github.com/docker/docker/cmd/dockerd/docker.go:115 +0x1a6 fp=0xc0006c1f50 sp=0xc0006c1ed0 pc=0x3da1fe6
runtime.main()
	/usr/local/go/src/runtime/proc.go:271 +0x29d fp=0xc0006c1fe0 sp=0xc0006c1f50 pc=0x213e1dd
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0006c1fe8 sp=0xc0006c1fe0 pc=0x2176d41
goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008cfa8 sp=0xc00008cf88 pc=0x213e62e
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:408
runtime.forcegchelper()
	/usr/local/go/src/runtime/proc.go:326 +0xb3 fp=0xc00008cfe0 sp=0xc00008cfa8 pc=0x213e493
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008cfe8 sp=0xc00008cfe0 pc=0x2176d41
created by runtime.init.6 in goroutine 1
	/usr/local/go/src/runtime/proc.go:314 +0x1a
goroutine 18 gp=0xc000102380 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc000088780 sp=0xc000088760 pc=0x213e62e
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:408
runtime.bgsweep(0xc0000b6000)
	/usr/local/go/src/runtime/mgcsweep.go:318 +0xdf fp=0xc0000887c8 sp=0xc000088780 pc=0x2127d5f
runtime.gcenable.gowrap1()
	/usr/local/go/src/runtime/mgc.go:203 +0x25 fp=0xc0000887e0 sp=0xc0000887c8 pc=0x211c665
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0000887e8 sp=0xc0000887e0 pc=0x2176d41
created by runtime.gcenable in goroutine 1
	/usr/local/go/src/runtime/mgc.go:203 +0x66
goroutine 19 gp=0xc000102540 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0xd1e628?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc000088f78 sp=0xc000088f58 pc=0x213e62e
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:408
runtime.(*scavengerState).park(0x3fb6ee0)
	/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000088fa8 sp=0xc000088f78 pc=0x2125709
runtime.bgscavenge(0xc0000b6000)
	/usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000088fc8 sp=0xc000088fa8 pc=0x2125cb9
runtime.gcenable.gowrap2()
	/usr/local/go/src/runtime/mgc.go:204 +0x25 fp=0xc000088fe0 sp=0xc000088fc8 pc=0x211c605
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000088fe8 sp=0xc000088fe0 pc=0x2176d41
created by runtime.gcenable in goroutine 1
	/usr/local/go/src/runtime/mgc.go:204 +0xa5
goroutine 20 gp=0xc000102a80 m=nil [finalizer wait]:
runtime.gopark(0xc00008c648?, 0x210eec5?, 0xa8?, 0x1?, 0xc0000061c0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008c620 sp=0xc00008c600 pc=0x213e62e
runtime.runfinq()
	/usr/local/go/src/runtime/mfinal.go:194 +0x107 fp=0xc00008c7e0 sp=0xc00008c620 pc=0x211b627
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008c7e8 sp=0xc00008c7e0 pc=0x2176d41
created by runtime.createfing in goroutine 1
	/usr/local/go/src/runtime/mfinal.go:164 +0x3d
goroutine 21 gp=0xc00028ee00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc000089750 sp=0xc000089730 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc0000897e0 sp=0xc000089750 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0000897e8 sp=0xc0000897e0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 22 gp=0xc00028efc0 m=nil [GC worker (idle)]:
runtime.gopark(0x170174425e1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc000089f50 sp=0xc000089f30 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000089fe0 sp=0xc000089f50 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000089fe8 sp=0xc000089fe0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 23 gp=0xc00028f180 m=nil [GC worker (idle)]:
runtime.gopark(0x17017442334?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008a750 sp=0xc00008a730 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00008a7e0 sp=0xc00008a750 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008a7e8 sp=0xc00008a7e0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 24 gp=0xc00028f340 m=nil [GC worker (idle)]:
runtime.gopark(0x170174404b8?, 0x3?, 0xee?, 0x8e?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008af50 sp=0xc00008af30 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00008afe0 sp=0xc00008af50 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008afe8 sp=0xc00008afe0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 25 gp=0xc00028f500 m=nil [GC worker (idle)]:
runtime.gopark(0x1701744027c?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008b750 sp=0xc00008b730 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00008b7e0 sp=0xc00008b750 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008b7e8 sp=0xc00008b7e0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 26 gp=0xc00028f6c0 m=nil [GC worker (idle)]:
runtime.gopark(0x1701744071a?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00008bf50 sp=0xc00008bf30 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00008bfe0 sp=0xc00008bf50 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00008bfe8 sp=0xc00008bfe0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 27 gp=0xc00028f880 m=nil [GC worker (idle)]:
runtime.gopark(0x4025000?, 0x1?, 0xbd?, 0xb6?, 0x0?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0003ce750 sp=0xc0003ce730 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc0003ce7e0 sp=0xc0003ce750 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0003ce7e8 sp=0xc0003ce7e0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 28 gp=0xc00028fa40 m=nil [GC worker (idle)]:
runtime.gopark(0x17017447284?, 0x0?, 0x0?, 0x0?, 0xc0003cf1e8?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0003cef50 sp=0xc0003cef30 pc=0x213e62e
runtime.gcBgMarkWorker()
	/usr/local/go/src/runtime/mgc.go:1310 +0xe5 fp=0xc0003cefe0 sp=0xc0003cef50 pc=0x211e745
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0003cefe8 sp=0xc0003cefe0 pc=0x2176d41
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/local/go/src/runtime/mgc.go:1234 +0x1c
goroutine 3 gp=0xc0005028c0 m=nil [select]:
runtime.gopark(0xc0003d1778?, 0x3?, 0xb8?, 0x75?, 0xc0003d1772?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0003d1618 sp=0xc0003d15f8 pc=0x213e62e
runtime.selectgo(0xc0003d1778, 0xc0003d176c, 0xc000324a80?, 0x0, 0x0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc0003d1738 sp=0xc0003d1618 pc=0x21504a5
go.opencensus.io/stats/view.(*worker).start(0xc000324a80)
	/go/src/github.com/docker/docker/vendor/go.opencensus.io/stats/view/worker.go:292 +0x9f fp=0xc0003d17c8 sp=0xc0003d1738 pc=0x3acc11f
go.opencensus.io/stats/view.init.0.gowrap1()
	/go/src/github.com/docker/docker/vendor/go.opencensus.io/stats/view/worker.go:34 +0x25 fp=0xc0003d17e0 sp=0xc0003d17c8 pc=0x3acb485
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0003d17e8 sp=0xc0003d17e0 pc=0x2176d41
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/go/src/github.com/docker/docker/vendor/go.opencensus.io/stats/view/worker.go:34 +0x8d
goroutine 52 gp=0xc0005d4380 m=nil [select]:
runtime.gopark(0xc0003cbd90?, 0x2?, 0xa0?, 0xbe?, 0xc0003cbd64?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00052dc10 sp=0xc00052dbf0 pc=0x213e62e
runtime.selectgo(0xc00052dd90, 0xc0003cbd60, 0x200000000?, 0x0, 0xc0003cbdb8?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00052dd30 sp=0xc00052dc10 pc=0x21504a5
io.(*pipe).read(0xc0005cede0, {0xc0006da000, 0x10000, 0x210e31e?})
	/usr/local/go/src/io/pipe.go:57 +0xa5 fp=0xc00052ddc0 sp=0xc00052dd30 pc=0x21c40a5
io.(*PipeReader).Read(0xc0003cbe08?, {0xc0006da000?, 0x4025000?, 0xc0003cbe90?})
	/usr/local/go/src/io/pipe.go:134 +0x1a fp=0xc00052ddf0 sp=0xc00052ddc0 pc=0x21c47fa
bufio.(*Scanner).Scan(0xc00052df28)
	/usr/local/go/src/bufio/scan.go:219 +0x81e fp=0xc00052dec8 sp=0xc00052ddf0 pc=0x228c35e
github.com/sirupsen/logrus.(*Entry).writerScanner(0xc000396e00, 0xc0005cede0, 0xc000584c50)
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:86 +0x11d fp=0xc00052dfb8 sp=0xc00052dec8 pc=0x236ddbd
github.com/sirupsen/logrus.(*Entry).WriterLevel.gowrap1()
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x28 fp=0xc00052dfe0 sp=0xc00052dfb8 pc=0x236dc68
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00052dfe8 sp=0xc00052dfe0 pc=0x2176d41
created by github.com/sirupsen/logrus.(*Entry).WriterLevel in goroutine 1
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x31f
goroutine 51 gp=0xc000603c00 m=nil [select]:
runtime.gopark(0xc000531d90?, 0x2?, 0x38?, 0x1c?, 0xc000531d64?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc000531c10 sp=0xc000531bf0 pc=0x213e62e
runtime.selectgo(0xc000531d90, 0xc000531d60, 0x0?, 0x0, 0xc0003d1db8?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000531d30 sp=0xc000531c10 pc=0x21504a5
io.(*pipe).read(0xc0005ced80, {0xc00051a1b5, 0xfe4b, 0x0?})
	/usr/local/go/src/io/pipe.go:57 +0xa5 fp=0xc000531dc0 sp=0xc000531d30 pc=0x21c40a5
io.(*PipeReader).Read(0xc00051a000?, {0xc00051a1b5?, 0xc000531e88?, 0xc000531e90?})
	/usr/local/go/src/io/pipe.go:134 +0x1a fp=0xc000531df0 sp=0xc000531dc0 pc=0x21c47fa
bufio.(*Scanner).Scan(0xc000531f28)
	/usr/local/go/src/bufio/scan.go:219 +0x81e fp=0xc000531ec8 sp=0xc000531df0 pc=0x228c35e
github.com/sirupsen/logrus.(*Entry).writerScanner(0xc000396e00, 0xc0005ced80, 0xc000584c40)
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:86 +0x11d fp=0xc000531fb8 sp=0xc000531ec8 pc=0x236ddbd
github.com/sirupsen/logrus.(*Entry).WriterLevel.gowrap1()
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x28 fp=0xc000531fe0 sp=0xc000531fb8 pc=0x236dc68
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000531fe8 sp=0xc000531fe0 pc=0x2176d41
created by github.com/sirupsen/logrus.(*Entry).WriterLevel in goroutine 1
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x31f
goroutine 66 gp=0xc000007a40 m=8 mp=0xc00023b808 [syscall, locked to thread]:
syscall.Syscall6(0xf7, 0x1, 0xa1, 0xc00079be08, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39 fp=0xc00079bdd0 sp=0xc00079bd70 pc=0x21d0ff9
os.(*Process).blockUntilWaitable(0xc00072c120)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76 fp=0xc00079bea8 sp=0xc00079bdd0 pc=0x21ff9b6
os.(*Process).wait(0xc00072c120)
	/usr/local/go/src/os/exec_unix.go:22 +0x25 fp=0xc00079bf08 sp=0xc00079bea8 pc=0x21f8cc5
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001f2f00)
	/usr/local/go/src/os/exec/exec.go:906 +0x45 fp=0xc00079bf68 sp=0xc00079bf08 pc=0x27052c5
github.com/docker/docker/libcontainerd/supervisor.(*remote).startContainerd.func1()
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:201 +0xbc fp=0xc00079bfe0 sp=0xc00079bf68 pc=0x3d9477c
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00079bfe8 sp=0xc00079bfe0 pc=0x2176d41
created by github.com/docker/docker/libcontainerd/supervisor.(*remote).startContainerd in goroutine 53
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:182 +0x51b
goroutine 50 gp=0xc000007c00 m=nil [select]:
runtime.gopark(0xc00009fd90?, 0x2?, 0x38?, 0xfc?, 0xc00009fd64?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00009fc10 sp=0xc00009fbf0 pc=0x213e62e
runtime.selectgo(0xc00009fd90, 0xc00009fd60, 0x200000000?, 0x0, 0xc00008ddb8?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00009fd30 sp=0xc00009fc10 pc=0x21504a5
io.(*pipe).read(0xc0005ced20, {0xc0006ca970, 0xf690, 0x210e200?})
	/usr/local/go/src/io/pipe.go:57 +0xa5 fp=0xc00009fdc0 sp=0xc00009fd30 pc=0x21c40a5
io.(*PipeReader).Read(0xc0006ca7bb?, {0xc0006ca970?, 0xc00009fe88?, 0xc00009fe90?})
	/usr/local/go/src/io/pipe.go:134 +0x1a fp=0xc00009fdf0 sp=0xc00009fdc0 pc=0x21c47fa
bufio.(*Scanner).Scan(0xc00009ff28)
	/usr/local/go/src/bufio/scan.go:219 +0x81e fp=0xc00009fec8 sp=0xc00009fdf0 pc=0x228c35e
github.com/sirupsen/logrus.(*Entry).writerScanner(0xc000396e00, 0xc0005ced20, 0xc000584c30)
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:86 +0x11d fp=0xc00009ffb8 sp=0xc00009fec8 pc=0x236ddbd
github.com/sirupsen/logrus.(*Entry).WriterLevel.gowrap1()
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x28 fp=0xc00009ffe0 sp=0xc00009ffb8 pc=0x236dc68
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00009ffe8 sp=0xc00009ffe0 pc=0x2176d41
created by github.com/sirupsen/logrus.(*Entry).WriterLevel in goroutine 1
	/go/src/github.com/docker/docker/vendor/github.com/sirupsen/logrus/writer.go:57 +0x31f
goroutine 53 gp=0xc0005a2e00 m=nil [select]:
runtime.gopark(0xc00070f630?, 0x2?, 0x8?, 0x0?, 0xc00070f584?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00070f430 sp=0xc00070f410 pc=0x213e62e
runtime.selectgo(0xc00070f630, 0xc00070f580, 0x5eb980?, 0x0, 0xd0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00070f550 sp=0xc00070f430 pc=0x21504a5
google.golang.org/grpc.(*pickerWrapper).pick(0xc0001756a0, {0xd5bd20, 0xc0005eb980}, 0x0, {{0x962ad6?, 0xc0005eb980?}, {0xd5bd20?, 0xc0005eb980?}})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/picker_wrapper.go:120 +0x166 fp=0xc00070f6a0 sp=0xc00070f550 pc=0x298b446
google.golang.org/grpc.(*ClientConn).getTransport(0x0?, {0xd5bd20?, 0xc0005eb980?}, 0x1b?, {0x962ad6?, 0x210c2bf?})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1050 +0x2f fp=0xc00070f6f0 sp=0xc00070f6a0 pc=0x2985e0f
google.golang.org/grpc.(*csAttempt).getTransport(0xc0005f1e10)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:468 +0x45 fp=0xc00070f730 sp=0xc00070f6f0 pc=0x299fa85
google.golang.org/grpc.newClientStreamWithParams.func2(0xc0005f1e10)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:351 +0x25 fp=0xc00070f750 sp=0xc00070f730 pc=0x299f065
google.golang.org/grpc.(*clientStream).withRetry(0xc0005bb680, 0xc000585e50, 0xc00070f8e0)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:789 +0x13a fp=0xc00070f7c0 sp=0xc00070f750 pc=0x29a0cda
google.golang.org/grpc.newClientStreamWithParams({0xd5bd90, 0xc000397a40}, 0x3fb44a0, 0xc0005a7008, {0x962ad6, 0x1c}, {0x0, 0x0, 0x0, 0x0, ...}, ...)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:363 +0xb9d fp=0xc00070f938 sp=0xc00070f7c0 pc=0x299eb1d
google.golang.org/grpc.newClientStream.func3({0xd5bd90?, 0xc000397a40?}, 0xc000397a40?)
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:220 +0x87 fp=0xc00070f9c8 sp=0xc00070f938 pc=0x299dec7
google.golang.org/grpc.newClientStream({0xd5bd90, 0xc000397a40}, 0x3fb44a0, 0xc0005a7008, {0x962ad6, 0x1c}, {0xc0007927e0, 0x6, 0x60?})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:255 +0x783 fp=0xc00070fb30 sp=0xc00070f9c8 pc=0x299d8a3
google.golang.org/grpc.invoke({0xd5bd90?, 0xc000397a40?}, {0x962ad6?, 0x53e320?}, {0x66cce0, 0xc0005e3140}, {0x66ce60, 0xc0005eb890}, 0x10?, {0xc0007927e0, ...})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/call.go:66 +0x77 fp=0xc00070fb98 sp=0xc00070fb30 pc=0x2980517
github.com/moby/buildkit/util/grpcerrors.UnaryClientInterceptor({0xd5bd90?, 0xc000397a40?}, {0x962ad6?, 0xc000600008?}, {0x66cce0?, 0xc0005e3140?}, {0x66ce60?, 0xc0005eb890?}, 0xc00070fca0?, 0xa3d538, ...)
	/go/src/github.com/docker/docker/vendor/github.com/moby/buildkit/util/grpcerrors/intercept.go:41 +0x7e fp=0xc00070fc18 sp=0xc00070fb98 pc=0x2effb1e
google.golang.org/grpc.(*ClientConn).Invoke(0xc0005a7008, {0xd5bd90?, 0xc000397a40?}, {0x962ad6?, 0x1c?}, {0x66cce0?, 0xc0005e3140?}, {0x66ce60?, 0xc0005eb890?}, {0xc000175a00, ...})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/call.go:35 +0x205 fp=0xc00070fcb0 sp=0xc00070fc18 pc=0x29803e5
google.golang.org/grpc/health/grpc_health_v1.(*healthClient).Check(0xc000585df0, {0xd5bd90, 0xc000397a40}, 0xc0005e3140, {0xc000585e00, 0x1, 0xd5bd58?})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go:90 +0x167 fp=0xc00070fd40 sp=0xc00070fcb0 pc=0x2e1a5c7
github.com/containerd/containerd.(*Client).IsServing(0xc0005ee500, {0xd5bd90, 0xc000397a40})
	/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/client.go:259 +0x122 fp=0xc00070fdb8 sp=0xc00070fd40 pc=0x2eafb82
github.com/docker/docker/libcontainerd/supervisor.(*remote).monitorDaemon(0xc000586488, {0xd5bd58, 0xc000114fa0})
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:322 +0x8df fp=0xc00070ffb8 sp=0xc00070fdb8 pc=0x3d9517f
github.com/docker/docker/libcontainerd/supervisor.Start.gowrap1()
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:99 +0x28 fp=0xc00070ffe0 sp=0xc00070ffb8 pc=0x3d93948
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00070ffe8 sp=0xc00070ffe0 pc=0x2176d41
created by github.com/docker/docker/libcontainerd/supervisor.Start in goroutine 1
	/go/src/github.com/docker/docker/libcontainerd/supervisor/remote_daemon.go:99 +0x4ef
goroutine 54 gp=0xc0005a2fc0 m=nil [select]:
runtime.gopark(0xc0003cc760?, 0x2?, 0x0?, 0x0?, 0xc0003cc724?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0003cc5c8 sp=0xc0003cc5a8 pc=0x213e62e
runtime.selectgo(0xc0003cc760, 0xc0003cc720, 0xc0002b9140?, 0x0, 0xc0003cc740?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc0003cc6e8 sp=0xc0003cc5c8 pc=0x21504a5
google.golang.org/grpc/internal/grpcsync.(*CallbackSerializer).run(0xc0005853c0, {0xd5bd58, 0xc000115180})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:88 +0x115 fp=0xc0003cc7b8 sp=0xc0003cc6e8 pc=0x28cf995
google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer.gowrap1()
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x28 fp=0xc0003cc7e0 sp=0xc0003cc7b8 pc=0x28cf768
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0003cc7e8 sp=0xc0003cc7e0 pc=0x2176d41
created by google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer in goroutine 53
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x11a
goroutine 55 gp=0xc0005a3180 m=nil [select]:
runtime.gopark(0xc0007a5f60?, 0x2?, 0xe8?, 0xc4?, 0xc0007a5f24?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0007a5dc8 sp=0xc0007a5da8 pc=0x213e62e
runtime.selectgo(0xc0007a5f60, 0xc0007a5f20, 0xc0003df8d0?, 0x0, 0x0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc0007a5ee8 sp=0xc0007a5dc8 pc=0x21504a5
google.golang.org/grpc/internal/grpcsync.(*CallbackSerializer).run(0xc0005853f0, {0xd5bd58, 0xc0001151d0})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:88 +0x115 fp=0xc0007a5fb8 sp=0xc0007a5ee8 pc=0x28cf995
google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer.gowrap1()
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x28 fp=0xc0007a5fe0 sp=0xc0007a5fb8 pc=0x28cf768
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0007a5fe8 sp=0xc0007a5fe0 pc=0x2176d41
created by google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer in goroutine 53
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x11a
goroutine 56 gp=0xc0005a3340 m=nil [select]:
runtime.gopark(0xc00070bf60?, 0x2?, 0x0?, 0x0?, 0xc00070bf24?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc00070bdc8 sp=0xc00070bda8 pc=0x213e62e
runtime.selectgo(0xc00070bf60, 0xc00070bf20, 0x0?, 0x0, 0x0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00070bee8 sp=0xc00070bdc8 pc=0x21504a5
google.golang.org/grpc/internal/grpcsync.(*CallbackSerializer).run(0xc000585420, {0xd5bd58, 0xc000115220})
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:88 +0x115 fp=0xc00070bfb8 sp=0xc00070bee8 pc=0x28cf995
google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer.gowrap1()
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x28 fp=0xc00070bfe0 sp=0xc00070bfb8 pc=0x28cf768
runtime.goexit({})
	/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00070bfe8 sp=0xc00070bfe0 pc=0x2176d41
created by google.golang.org/grpc/internal/grpcsync.NewCallbackSerializer in goroutine 53
	/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go:52 +0x11a

Reproduce

Start dockerd with the following command line:

dockerd --host=unix:///var/run/dind/docker.sock

Host machine:

Linux[REDACTED].ec2.internal 6.1.115 #1 SMP PREEMPT_DYNAMIC Fri Nov 15 19:15:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

AMI ID: bottlerocket-aws-k8s-1.31-x86_64-v1.27.1-efd46c32

Host OS:

amazon/bottlerocket-aws-k8s-1.31-x86_64-v1.27.1-efd46c32

Stranger still, some nodes are able to start dockerd normally while others aren't, and we couldn't diagnose any further. We checked with killsnoop, but nothing unusual showed up; dmesg is also clean on both the container and the host.

containerd is able to start normally.

Note: the container running dind has privileged: true in its spec.

Expected behavior

dockerd should start normally. Instead, it immediately crashes.

docker info

Client:
 Version:    27.5.0
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.19.3
    Path:     /usr/local/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  083f676
    Path:     /usr/local/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 4
  Running: 0
  Paused: 0
  Stopped: 4
 Images: 1
 Server Version: 27.5.0
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
 runc version: v1.2.4-0-g6c52b3f
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.115
 Operating System: Alpine Linux v3.21 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 20.02GiB
 Name: [REDACTED]-68f5497c89-qdzgh
 ID: 5b1e2919-14a0-4eb4-9819-e733dba36e5b
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://[REDACTED]/docker/
 Live Restore Enabled: false
 Product License: Community Engine

docker version

docker version
Client:
 Version:           27.5.0
 API version:       1.47
 Go version:        go1.22.10
 Git commit:        a187fa5
 Built:             Mon Jan 13 15:23:50 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.5.0
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.22.10
  Git commit:       38b84dc
  Built:            Mon Jan 13 15:25:13 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.25
  GitCommit:        bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
 runc:
  Version:          1.2.4
  GitCommit:        v1.2.4-0-g6c52b3f
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Additional Info

No response

@heyvito heyvito added kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. status/0-triage labels Jan 16, 2025
@AkihiroSuda AkihiroSuda changed the title SIGSEGV: segmentation violation code=0x2 addr=0xc001302ce9 pc=0x2986024] on dind SIGSEGV: segmentation violation code=0x2 addr=0xc001302ce9 pc=0x2986024] on dind (grpc.(*ClientConn).resolveNow) Jan 16, 2025

heyvito commented Jan 16, 2025

Interesting find:

Letting dockerd start its own managed containerd triggers the issue:

INFO[2025-01-16T16:49:04.140065034Z] Starting up
DEBU[2025-01-16T16:49:04.140611620Z] Listener created for HTTP on unix (/var/run/docker.sock)
INFO[2025-01-16T16:49:04.140633987Z] containerd not running, starting managed containerd
INFO[2025-01-16T16:49:04.140906330Z] containerd is still running                   module=libcontainerd pid=24
DEBU[2025-01-16T16:49:04.141357486Z] created containerd monitoring client          address=/var/run/docker/containerd/containerd.sock module=libcontainerd
DEBU[2025-01-16T16:49:04.141542637Z] 2025/01/16 16:49:04 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/docker/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/docker/containerd/containerd.sock: connect: no such file or directory"  library=grpc
unexpected fault address ...

However, starting containerd manually and then starting dockerd does not trigger the issue. For instance:

/ # containerd &
---- SNIP 8< ----

/ # dockerd &
---- SNIP 8< ----

/ # docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

/ # docker pull alpine
docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
1f3e46996e29: Pull complete
Digest: sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest

/ #

Update: Changed the dind container to:

      containers:
        - command:
            - /bin/ash
          args:
            - -c
            - (containerd&) && sleep 10 && docker-init -- dockerd --host=unix:///var/run/docker.sock
          image: docker:27.5.0-dind
          imagePullPolicy: IfNotPresent
          name: dind
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /var/run
              name: dind-sock
      volumes:
        - emptyDir: {}
          name: dind-sock

The volume in the example above is shared with the application so it can connect to the Docker sidecar's socket.
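A possible hardening of the workaround above (a sketch, not something from this report): instead of a fixed `sleep 10`, the wrapper command could poll for containerd's socket before launching dockerd. The socket path and timeout below are assumptions.

```shell
#!/bin/sh
# Sketch: wait until a unix socket appears before proceeding.
# Path and timeout values are illustrative, not from the original report.
wait_for_socket() {
  sock="$1"
  timeout="${2:-30}"   # seconds to wait before giving up
  i=0
  while [ ! -S "$sock" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$timeout" ]; then
      echo "timed out waiting for $sock" >&2
      return 1
    fi
    sleep 1
  done
}

# Hypothetical usage in the container args, replacing the fixed sleep:
#   (containerd &) && wait_for_socket /run/containerd/containerd.sock 30 \
#     && docker-init -- dockerd --host=unix:///var/run/docker.sock
```

This only narrows the race window around startup ordering; given the update below, it would not address the underlying crash.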

@heyvito

heyvito commented Jan 16, 2025

Update: The workaround only works intermittently; dockerd still panics at the same callsite as in the original post.

Full log
time="2025-01-16T18:12:20.141857521Z" level=info msg="starting containerd" revision=bcc810d6b9066471b0b6fa75f557a15a1cbf31bb version=v1.7.25
time="2025-01-16T18:12:20.165805556Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2025-01-16T18:12:20.165828827Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2025-01-16T18:12:20.166013371Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
time="2025-01-16T18:12:20.166030991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166122603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166137683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166147704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166154544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166234525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.166452290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.167186455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.119\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.167208046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.167420350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:12:20.167454031Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2025-01-16T18:12:20.167469741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2025-01-16T18:12:20.167581754Z" level=info msg="metadata content store policy set" policy=shared
time="2025-01-16T18:12:20.174230003Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2025-01-16T18:12:20.174286415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2025-01-16T18:12:20.174305065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
time="2025-01-16T18:12:20.174320325Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
time="2025-01-16T18:12:20.174330785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2025-01-16T18:12:20.174532340Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2025-01-16T18:12:20.174920908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2025-01-16T18:12:20.175142353Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
time="2025-01-16T18:12:20.175174183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
time="2025-01-16T18:12:20.175206474Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
time="2025-01-16T18:12:20.175228634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175243765Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175259585Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175283765Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175299156Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175313266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175336377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175349007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:12:20.175376018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175390308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175398048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175406948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175414668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175426679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175434149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175444199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175454959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175467829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175475280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175485310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175494760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175513330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
time="2025-01-16T18:12:20.175546281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175565471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175573132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2025-01-16T18:12:20.175629353Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2025-01-16T18:12:20.175644923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
time="2025-01-16T18:12:20.175652013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2025-01-16T18:12:20.175661933Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
time="2025-01-16T18:12:20.175677994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175692294Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
time="2025-01-16T18:12:20.175709285Z" level=info msg="NRI interface is disabled by configuration."
time="2025-01-16T18:12:20.175732595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:12:20.175845857Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[BinaryName: CriuImagePath: CriuPath: CriuWorkPath: IoGid:0 IoUid:0 NoNewKeyring:false NoPivotRoot:false Root: ShimCgroup: SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false 
RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
time="2025-01-16T18:12:20.175891098Z" level=info msg="Connect containerd service"
time="2025-01-16T18:12:20.175920329Z" level=info msg="using legacy CRI server"
time="2025-01-16T18:12:20.175933899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
time="2025-01-16T18:12:20.176020501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2025-01-16T18:12:20.176388619Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
time="2025-01-16T18:12:20.176517511Z" level=info msg="Start subscribing containerd event"
time="2025-01-16T18:12:20.176560032Z" level=info msg="Start recovering state"
time="2025-01-16T18:12:20.176664514Z" level=info msg="Start event monitor"
time="2025-01-16T18:12:20.176688835Z" level=info msg="Start snapshots syncer"
time="2025-01-16T18:12:20.176695515Z" level=info msg="Start cni network conf syncer for default"
time="2025-01-16T18:12:20.176715046Z" level=info msg="Start streaming server"
time="2025-01-16T18:12:20.176670795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
time="2025-01-16T18:12:20.176901290Z" level=info msg=serving... address=/run/containerd/containerd.sock
time="2025-01-16T18:12:20.176931350Z" level=info msg="containerd successfully booted in 0.035960s"
time="2025-01-16T18:12:30.150375893Z" level=info msg="Starting up"
time="2025-01-16T18:12:30.151228810Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
time="2025-01-16T18:12:30.181632682Z" level=info msg="Loading containers: start."
time="2025-01-16T18:12:31.200257248Z" level=info msg="Loading containers: done."
time="2025-01-16T18:12:31.208196035Z" level=info msg="Docker daemon" commit=38b84dc containerd-snapshotter=false storage-driver=overlay2 version=27.5.0
time="2025-01-16T18:12:31.208308617Z" level=info msg="Daemon has completed initialization"
time="2025-01-16T18:12:31.236211606Z" level=info msg="API listen on /var/run/docker.sock"
unexpected fault address 0xc001b354e9
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x2 addr=0xc001b354e9 pc=0x2986024]

goroutine 341 gp=0xc0007bae00 m=16 mp=0xc000581808 [running]:
runtime.throw({0x9000fb?, 0x0?})
/usr/local/go/src/runtime/panic.go:1023 +0x5c fp=0xc000abdbe8 sp=0xc000abdbb8 pc=0x213b5dc
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:895 +0x285 fp=0xc000abdc48 sp=0xc000abdbe8 pc=0x2154285
google.golang.org/grpc.(*ClientConn).resolveNow(0xc000dd9808, {})
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1081 +0x44 fp=0xc000abdc70 sp=0xc000abdc48 pc=0x2986024
google.golang.org/grpc.(*addrConn).createTransport.func1(0x80?)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1362 +0x11f fp=0xc000abdce8 sp=0xc000abdc70 pc=0x2987c9f
google.golang.org/grpc/internal/transport.(*http2Client).Close(0xc0009de008, {0xd330a0, 0xc00094e000})
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1001 +0x1a9 fp=0xc000abde78 sp=0xc000abdce8 pc=0x2938149
google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0009de008, 0xc0003886c0)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1672 +0x845 fp=0xc000abdfc0 sp=0xc000abde78 pc=0x293d525
google.golang.org/grpc/internal/transport.newHTTP2Client.gowrap4()
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:412 +0x25 fp=0xc000abdfe0 sp=0xc000abdfc0 pc=0x2932d05

Full log with --debug flag
time="2025-01-16T18:17:14.591592914Z" level=info msg="starting containerd" revision=bcc810d6b9066471b0b6fa75f557a15a1cbf31bb version=v1.7.25
time="2025-01-16T18:17:14.665807549Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2025-01-16T18:17:14.665833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2025-01-16T18:17:14.666022154Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
time="2025-01-16T18:17:14.666041964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666104005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666117836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666127436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666135876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666214507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.666442392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.667168418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.119\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.667189358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.667292970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2025-01-16T18:17:14.667306141Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2025-01-16T18:17:14.667317351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2025-01-16T18:17:14.667414983Z" level=info msg="metadata content store policy set" policy=shared
time="2025-01-16T18:17:14.674764908Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2025-01-16T18:17:14.674819940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2025-01-16T18:17:14.674849020Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
time="2025-01-16T18:17:14.674868390Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
time="2025-01-16T18:17:14.674888161Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2025-01-16T18:17:14.675069385Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2025-01-16T18:17:14.675399802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2025-01-16T18:17:14.675576556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
time="2025-01-16T18:17:14.675593086Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
time="2025-01-16T18:17:14.675601846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
time="2025-01-16T18:17:14.675610216Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675620637Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675627977Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675637897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675647337Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675654817Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675662377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675669327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2025-01-16T18:17:14.675681948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675693058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675700618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675707958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675717908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675725409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675735009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675742869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675750899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675761969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675771400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675778040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675805830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675818481Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
time="2025-01-16T18:17:14.675834901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675852551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675860031Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2025-01-16T18:17:14.675902542Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2025-01-16T18:17:14.675917803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
time="2025-01-16T18:17:14.675924373Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2025-01-16T18:17:14.675931293Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
time="2025-01-16T18:17:14.675937473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.675944523Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
time="2025-01-16T18:17:14.675953233Z" level=info msg="NRI interface is disabled by configuration."
time="2025-01-16T18:17:14.675959304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2025-01-16T18:17:14.676035485Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[BinaryName: CriuImagePath: CriuPath: CriuWorkPath: IoGid:0 IoUid:0 NoNewKeyring:false NoPivotRoot:false Root: ShimCgroup: SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false 
RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
time="2025-01-16T18:17:14.676069386Z" level=info msg="Connect containerd service"
time="2025-01-16T18:17:14.676088976Z" level=info msg="using legacy CRI server"
time="2025-01-16T18:17:14.676096367Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
time="2025-01-16T18:17:14.676157318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2025-01-16T18:17:14.676484885Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
time="2025-01-16T18:17:14.676621798Z" level=info msg="Start subscribing containerd event"
time="2025-01-16T18:17:14.676677949Z" level=info msg="Start recovering state"
time="2025-01-16T18:17:14.676725970Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
time="2025-01-16T18:17:14.676777171Z" level=info msg="Start event monitor"
time="2025-01-16T18:17:14.676800502Z" level=info msg=serving... address=/run/containerd/containerd.sock
time="2025-01-16T18:17:14.676808182Z" level=info msg="Start snapshots syncer"
time="2025-01-16T18:17:14.676832902Z" level=info msg="Start cni network conf syncer for default"
time="2025-01-16T18:17:14.676849683Z" level=info msg="Start streaming server"
time="2025-01-16T18:17:14.676868713Z" level=info msg="containerd successfully booted in 0.085753s"
time="2025-01-16T18:17:24.600261946Z" level=info msg="Starting up"
time="2025-01-16T18:17:24.601131124Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
time="2025-01-16T18:17:24.601437481Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
time="2025-01-16T18:17:24.664696245Z" level=debug msg="Golang's threads limit set to 225720"
time="2025-01-16T18:17:24.665065632Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
time="2025-01-16T18:17:24.668133737Z" level=debug msg="Using default logging driver json-file"
time="2025-01-16T18:17:24.668402853Z" level=debug msg="No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas"
time="2025-01-16T18:17:24.668419784Z" level=debug msg="processing event stream" module=libcontainerd namespace=plugins.moby
time="2025-01-16T18:17:24.680210081Z" level=debug msg="[graphdriver] priority list: [overlay2 fuse-overlayfs btrfs zfs vfs]"
time="2025-01-16T18:17:24.697378284Z" level=debug msg="successfully detected metacopy status" storage-driver=overlay2 usingMetacopy=false
time="2025-01-16T18:17:24.700201693Z" level=debug msg="backingFs=xfs, projectQuotaSupported=false, usingMetacopy=false, indexOff=\"index=off,\", userxattr=\"\"" storage-driver=overlay2
time="2025-01-16T18:17:24.700226074Z" level=debug msg="Initialized graph driver overlay2"
time="2025-01-16T18:17:24.702950591Z" level=debug msg="Max Concurrent Downloads: 3"
time="2025-01-16T18:17:24.702965681Z" level=debug msg="Max Concurrent Uploads: 5"
time="2025-01-16T18:17:24.702970621Z" level=debug msg="Max Download Attempts: 5"
time="2025-01-16T18:17:24.703022493Z" level=info msg="Loading containers: start."
time="2025-01-16T18:17:24.703065483Z" level=debug msg="Option DefaultDriver: bridge"
time="2025-01-16T18:17:24.703080194Z" level=debug msg="Option DefaultNetwork: bridge"
time="2025-01-16T18:17:24.703108894Z" level=debug msg="Network Control Plane MTU: 1500"
time="2025-01-16T18:17:24.703319109Z" level=debug msg="processing event stream" module=libcontainerd namespace=moby
time="2025-01-16T18:17:24.705464854Z" level=debug msg="unable to initialize firewalld; using raw iptables instead" error="Failed to connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory"
time="2025-01-16T18:17:24.706610388Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.707506657Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:24.708496428Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]"
time="2025-01-16T18:17:24.709478478Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:24.710359747Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D PREROUTING]"
time="2025-01-16T18:17:24.711194895Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -D OUTPUT]"
time="2025-01-16T18:17:24.712039383Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -F DOCKER]"
time="2025-01-16T18:17:24.712916141Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -X DOCKER]"
time="2025-01-16T18:17:24.713828770Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER]"
time="2025-01-16T18:17:24.714604277Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER]"
time="2025-01-16T18:17:24.759073825Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.760684459Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.761473246Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.762291743Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.763100030Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.763943738Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.764898008Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -n -L DOCKER]"
time="2025-01-16T18:17:24.765737626Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -N DOCKER]"
time="2025-01-16T18:17:24.766571503Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER]"
time="2025-01-16T18:17:24.767392410Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER]"
time="2025-01-16T18:17:24.768311190Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.769246019Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.770044506Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.771032997Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.771909005Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN]"
time="2025-01-16T18:17:24.772816765Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN]"
time="2025-01-16T18:17:24.773722213Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN]"
time="2025-01-16T18:17:24.774655063Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN]"
time="2025-01-16T18:17:24.775818328Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C DOCKER -i loopback0 -d 127.0.0.0/8 -j RETURN]"
time="2025-01-16T18:17:24.776790118Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L FORWARD]"
time="2025-01-16T18:17:24.777833300Z" level=debug msg="Modules already loaded" modules="[ip6_tables]"
time="2025-01-16T18:17:24.777854770Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.778642807Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:24.779822872Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst ::1/128 -j DOCKER]"
time="2025-01-16T18:17:24.780733321Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:24.781593829Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -D PREROUTING]"
time="2025-01-16T18:17:24.782430517Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -D OUTPUT]"
time="2025-01-16T18:17:24.858824739Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -F DOCKER]"
time="2025-01-16T18:17:24.859730598Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -X DOCKER]"
time="2025-01-16T18:17:24.860800300Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -F DOCKER]"
time="2025-01-16T18:17:24.861690749Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -X DOCKER]"
time="2025-01-16T18:17:24.862544437Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.863411865Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.864432187Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.865328036Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.866135962Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -F DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.866891648Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -X DOCKER-ISOLATION]"
time="2025-01-16T18:17:24.867715496Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -n -L DOCKER]"
time="2025-01-16T18:17:24.868765308Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t nat -N DOCKER]"
time="2025-01-16T18:17:24.869591215Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER]"
time="2025-01-16T18:17:24.870453064Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -N DOCKER]"
time="2025-01-16T18:17:24.871340812Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.872181890Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:24.872996027Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.873788934Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:24.874614602Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN]"
time="2025-01-16T18:17:24.875507071Z" level=debug msg="/usr/sbin/ip6tables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN]"
time="2025-01-16T18:17:24.876405650Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN]"
time="2025-01-16T18:17:24.877336859Z" level=debug msg="/usr/sbin/ip6tables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN]"
time="2025-01-16T18:17:24.878285199Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -P FORWARD DROP]"
time="2025-01-16T18:17:25.055199670Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.056394625Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -N DOCKER-USER]"
time="2025-01-16T18:17:25.057294824Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.058292175Z" level=debug msg="/usr/sbin/iptables, [--wait -A DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.059271425Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.060123443Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.060958661Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.062022654Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -N DOCKER-USER]"
time="2025-01-16T18:17:25.062890832Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.063776441Z" level=debug msg="/usr/sbin/ip6tables, [--wait -A DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.064860134Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.065775233Z" level=debug msg="/usr/sbin/ip6tables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.071454043Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.072852152Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.073943735Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.074942506Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.159115642Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.160484141Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.161639465Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.162570055Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.163514734Z" level=debug msg="/usr/sbin/ip6tables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.238933045Z" level=debug msg="/usr/sbin/ip6tables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.240083509Z" level=debug msg="Modules already loaded" modules="[nf_conntrack nf_conntrack_netlink]"
time="2025-01-16T18:17:25.240265203Z" level=debug msg="Allocating IPv4 pools for network bridge (4fb35af8caedec1bce277b788231620795b38a38eb3f653390cf83a697b09977)"
time="2025-01-16T18:17:25.240459217Z" level=debug msg="RequestPool: {AddressSpace:LocalDefault Pool: SubPool: Options:map[] Exclude:[169.254.172.1/32] V6:false}"
time="2025-01-16T18:17:25.240492928Z" level=debug msg="RequestAddress(LocalDefault/172.17.0.0/16, , map[RequestAddressType:com.docker.network.gateway])"
time="2025-01-16T18:17:25.240516288Z" level=debug msg="Request address PoolID:172.17.0.0/16 Bits: 65536, Unselected: 65534, Sequence: (0x80000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:0 Serial:false PrefAddress:invalid IP "
time="2025-01-16T18:17:25.240636181Z" level=debug msg="Did not find any interface with name docker0: Link not found"
time="2025-01-16T18:17:25.240656611Z" level=debug msg="Setting bridge mac address to 02:42:e5:1a:83:e2"
time="2025-01-16T18:17:25.241290325Z" level=debug msg="Assigning address to bridge interface docker0: 172.17.0.1/16"
time="2025-01-16T18:17:25.241376476Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
time="2025-01-16T18:17:25.242445969Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE]"
time="2025-01-16T18:17:25.243428680Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C DOCKER -i docker0 -j RETURN]"
time="2025-01-16T18:17:25.244432941Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -I DOCKER -i docker0 -j RETURN]"
time="2025-01-16T18:17:25.245367061Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C POSTROUTING -m addrtype --src-type LOCAL -o docker0 -j MASQUERADE]"
time="2025-01-16T18:17:25.246362042Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j DROP]"
time="2025-01-16T18:17:25.247287072Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT]"
time="2025-01-16T18:17:25.248163860Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 -o docker0 -j ACCEPT]"
time="2025-01-16T18:17:25.249100160Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
time="2025-01-16T18:17:25.250032470Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I FORWARD -i docker0 ! -o docker0 -j ACCEPT]"
time="2025-01-16T18:17:25.251102972Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:25.252062282Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]"
time="2025-01-16T18:17:25.253117044Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
time="2025-01-16T18:17:25.254276759Z" level=debug msg="/usr/sbin/iptables, [--wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]"
time="2025-01-16T18:17:25.255262590Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]"
time="2025-01-16T18:17:25.256230360Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -j DOCKER]"
time="2025-01-16T18:17:25.257081608Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
time="2025-01-16T18:17:25.258143280Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]"
time="2025-01-16T18:17:25.259256824Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:25.260330876Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-ISOLATION-STAGE-1]"
time="2025-01-16T18:17:25.261185584Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:25.262111804Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:25.262941011Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
time="2025-01-16T18:17:25.263868041Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
time="2025-01-16T18:17:25.264919693Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:25.266022106Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2]"
time="2025-01-16T18:17:25.266894094Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
time="2025-01-16T18:17:25.267891595Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP]"
time="2025-01-16T18:17:25.277332795Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.278846177Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.279790867Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.281024763Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.359119499Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.360488998Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.361636523Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.362749986Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.363595964Z" level=debug msg="/usr/sbin/ip6tables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.499083001Z" level=debug msg="/usr/sbin/ip6tables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.500425370Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.501419600Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.502411632Z" level=debug msg="/usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.503368912Z" level=debug msg="/usr/sbin/iptables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.579068619Z" level=debug msg="/usr/sbin/iptables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.580445648Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -n -L DOCKER-USER]"
time="2025-01-16T18:17:25.581637503Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-USER -j RETURN]"
time="2025-01-16T18:17:25.582722896Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.583719096Z" level=debug msg="/usr/sbin/ip6tables, [--wait -D FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.689067769Z" level=debug msg="/usr/sbin/ip6tables, [--wait -I FORWARD -j DOCKER-USER]"
time="2025-01-16T18:17:25.690760975Z" level=info msg="Loading containers: done."
time="2025-01-16T18:17:25.700747715Z" level=info msg="Docker daemon" commit=38b84dc containerd-snapshotter=false storage-driver=overlay2 version=27.5.0
time="2025-01-16T18:17:25.700892378Z" level=info msg="Daemon has completed initialization"
time="2025-01-16T18:17:25.727774465Z" level=debug msg="Registering routers"
time="2025-01-16T18:17:25.727824426Z" level=debug msg="Registering GET, /containers/{name:.*}/checkpoints"
time="2025-01-16T18:17:25.727919548Z" level=debug msg="Registering POST, /containers/{name:.*}/checkpoints"
time="2025-01-16T18:17:25.727976489Z" level=debug msg="Registering DELETE, /containers/{name}/checkpoints/{checkpoint}"
time="2025-01-16T18:17:25.728029780Z" level=debug msg="Registering HEAD, /containers/{name:.*}/archive"
time="2025-01-16T18:17:25.728070611Z" level=debug msg="Registering GET, /containers/json"
time="2025-01-16T18:17:25.728124452Z" level=debug msg="Registering GET, /containers/{name:.*}/export"
time="2025-01-16T18:17:25.728187473Z" level=debug msg="Registering GET, /containers/{name:.*}/changes"
time="2025-01-16T18:17:25.728257124Z" level=debug msg="Registering GET, /containers/{name:.*}/json"
time="2025-01-16T18:17:25.728308646Z" level=debug msg="Registering GET, /containers/{name:.*}/top"
time="2025-01-16T18:17:25.728365327Z" level=debug msg="Registering GET, /containers/{name:.*}/logs"
time="2025-01-16T18:17:25.728433908Z" level=debug msg="Registering GET, /containers/{name:.*}/stats"
time="2025-01-16T18:17:25.728507350Z" level=debug msg="Registering GET, /containers/{name:.*}/attach/ws"
time="2025-01-16T18:17:25.728563351Z" level=debug msg="Registering GET, /exec/{id:.*}/json"
time="2025-01-16T18:17:25.728637212Z" level=debug msg="Registering GET, /containers/{name:.*}/archive"
time="2025-01-16T18:17:25.728738115Z" level=debug msg="Registering POST, /containers/create"
time="2025-01-16T18:17:25.728799026Z" level=debug msg="Registering POST, /containers/{name:.*}/kill"
time="2025-01-16T18:17:25.728842667Z" level=debug msg="Registering POST, /containers/{name:.*}/pause"
time="2025-01-16T18:17:25.728931149Z" level=debug msg="Registering POST, /containers/{name:.*}/unpause"
time="2025-01-16T18:17:25.728985600Z" level=debug msg="Registering POST, /containers/{name:.*}/restart"
time="2025-01-16T18:17:25.729046071Z" level=debug msg="Registering POST, /containers/{name:.*}/start"
time="2025-01-16T18:17:25.729142483Z" level=debug msg="Registering POST, /containers/{name:.*}/stop"
time="2025-01-16T18:17:25.729210885Z" level=debug msg="Registering POST, /containers/{name:.*}/wait"
time="2025-01-16T18:17:25.729293666Z" level=debug msg="Registering POST, /containers/{name:.*}/resize"
time="2025-01-16T18:17:25.729349838Z" level=debug msg="Registering POST, /containers/{name:.*}/attach"
time="2025-01-16T18:17:25.729398169Z" level=debug msg="Registering POST, /containers/{name:.*}/exec"
time="2025-01-16T18:17:25.729432719Z" level=debug msg="Registering POST, /exec/{name:.*}/start"
time="2025-01-16T18:17:25.729470330Z" level=debug msg="Registering POST, /exec/{name:.*}/resize"
time="2025-01-16T18:17:25.729505031Z" level=debug msg="Registering POST, /containers/{name:.*}/rename"
time="2025-01-16T18:17:25.729540282Z" level=debug msg="Registering POST, /containers/{name:.*}/update"
time="2025-01-16T18:17:25.729591673Z" level=debug msg="Registering POST, /containers/prune"
time="2025-01-16T18:17:25.729643444Z" level=debug msg="Registering POST, /commit"
time="2025-01-16T18:17:25.729686665Z" level=debug msg="Registering PUT, /containers/{name:.*}/archive"
time="2025-01-16T18:17:25.729735896Z" level=debug msg="Registering DELETE, /containers/{name:.*}"
time="2025-01-16T18:17:25.729802427Z" level=debug msg="Registering GET, /images/json"
time="2025-01-16T18:17:25.729872509Z" level=debug msg="Registering GET, /images/search"
time="2025-01-16T18:17:25.729930210Z" level=debug msg="Registering GET, /images/get"
time="2025-01-16T18:17:25.729963011Z" level=debug msg="Registering GET, /images/{name:.*}/get"
time="2025-01-16T18:17:25.729999821Z" level=debug msg="Registering GET, /images/{name:.*}/history"
time="2025-01-16T18:17:25.730035172Z" level=debug msg="Registering GET, /images/{name:.*}/json"
time="2025-01-16T18:17:25.730065303Z" level=debug msg="Registering POST, /images/load"
time="2025-01-16T18:17:25.730089403Z" level=debug msg="Registering POST, /images/create"
time="2025-01-16T18:17:25.730150024Z" level=debug msg="Registering POST, /images/{name:.*}/push"
time="2025-01-16T18:17:25.730220376Z" level=debug msg="Registering POST, /images/{name:.*}/tag"
time="2025-01-16T18:17:25.730322618Z" level=debug msg="Registering POST, /images/prune"
time="2025-01-16T18:17:25.730365659Z" level=debug msg="Registering DELETE, /images/{name:.*}"
time="2025-01-16T18:17:25.730420320Z" level=debug msg="Registering OPTIONS, /{anyroute:.*}"
time="2025-01-16T18:17:25.730457811Z" level=debug msg="Registering GET, /_ping"
time="2025-01-16T18:17:25.730500662Z" level=debug msg="Registering HEAD, /_ping"
time="2025-01-16T18:17:25.730544093Z" level=debug msg="Registering GET, /events"
time="2025-01-16T18:17:25.730587274Z" level=debug msg="Registering GET, /info"
time="2025-01-16T18:17:25.730630105Z" level=debug msg="Registering GET, /version"
time="2025-01-16T18:17:25.730684316Z" level=debug msg="Registering GET, /system/df"
time="2025-01-16T18:17:25.730730347Z" level=debug msg="Registering POST, /auth"
time="2025-01-16T18:17:25.730753577Z" level=debug msg="Registering GET, /volumes"
time="2025-01-16T18:17:25.730775888Z" level=debug msg="Registering GET, /volumes/{name:.*}"
time="2025-01-16T18:17:25.730811498Z" level=debug msg="Registering POST, /volumes/create"
time="2025-01-16T18:17:25.730835769Z" level=debug msg="Registering POST, /volumes/prune"
time="2025-01-16T18:17:25.730862660Z" level=debug msg="Registering PUT, /volumes/{name:.*}"
time="2025-01-16T18:17:25.730892460Z" level=debug msg="Registering DELETE, /volumes/{name:.*}"
time="2025-01-16T18:17:25.730924331Z" level=debug msg="Registering POST, /build"
time="2025-01-16T18:17:25.730947761Z" level=debug msg="Registering POST, /build/prune"
time="2025-01-16T18:17:25.730970242Z" level=debug msg="Registering POST, /build/cancel"
time="2025-01-16T18:17:25.730993813Z" level=debug msg="Registering POST, /session"
time="2025-01-16T18:17:25.731015723Z" level=debug msg="Registering POST, /swarm/init"
time="2025-01-16T18:17:25.731038853Z" level=debug msg="Registering POST, /swarm/join"
time="2025-01-16T18:17:25.731060904Z" level=debug msg="Registering POST, /swarm/leave"
time="2025-01-16T18:17:25.731083224Z" level=debug msg="Registering GET, /swarm"
time="2025-01-16T18:17:25.731103905Z" level=debug msg="Registering GET, /swarm/unlockkey"
time="2025-01-16T18:17:25.731128515Z" level=debug msg="Registering POST, /swarm/update"
time="2025-01-16T18:17:25.731151826Z" level=debug msg="Registering POST, /swarm/unlock"
time="2025-01-16T18:17:25.731179026Z" level=debug msg="Registering GET, /services"
time="2025-01-16T18:17:25.731202177Z" level=debug msg="Registering GET, /services/{id}"
time="2025-01-16T18:17:25.731234898Z" level=debug msg="Registering POST, /services/create"
time="2025-01-16T18:17:25.731267458Z" level=debug msg="Registering POST, /services/{id}/update"
time="2025-01-16T18:17:25.731301539Z" level=debug msg="Registering DELETE, /services/{id}"
time="2025-01-16T18:17:25.731333450Z" level=debug msg="Registering GET, /services/{id}/logs"
time="2025-01-16T18:17:25.731381631Z" level=debug msg="Registering GET, /nodes"
time="2025-01-16T18:17:25.731406651Z" level=debug msg="Registering GET, /nodes/{id}"
time="2025-01-16T18:17:25.731446062Z" level=debug msg="Registering DELETE, /nodes/{id}"
time="2025-01-16T18:17:25.731473903Z" level=debug msg="Registering POST, /nodes/{id}/update"
time="2025-01-16T18:17:25.731512053Z" level=debug msg="Registering GET, /tasks"
time="2025-01-16T18:17:25.731554024Z" level=debug msg="Registering GET, /tasks/{id}"
time="2025-01-16T18:17:25.731592555Z" level=debug msg="Registering GET, /tasks/{id}/logs"
time="2025-01-16T18:17:25.731648286Z" level=debug msg="Registering GET, /secrets"
time="2025-01-16T18:17:25.731695847Z" level=debug msg="Registering POST, /secrets/create"
time="2025-01-16T18:17:25.731730248Z" level=debug msg="Registering DELETE, /secrets/{id}"
time="2025-01-16T18:17:25.731760129Z" level=debug msg="Registering GET, /secrets/{id}"
time="2025-01-16T18:17:25.731826330Z" level=debug msg="Registering POST, /secrets/{id}/update"
time="2025-01-16T18:17:25.731868121Z" level=debug msg="Registering GET, /configs"
time="2025-01-16T18:17:25.731909812Z" level=debug msg="Registering POST, /configs/create"
time="2025-01-16T18:17:25.731964343Z" level=debug msg="Registering DELETE, /configs/{id}"
time="2025-01-16T18:17:25.732035844Z" level=debug msg="Registering GET, /configs/{id}"
time="2025-01-16T18:17:25.732095966Z" level=debug msg="Registering POST, /configs/{id}/update"
time="2025-01-16T18:17:25.732146857Z" level=debug msg="Registering GET, /plugins"
time="2025-01-16T18:17:25.732191298Z" level=debug msg="Registering GET, /plugins/{name:.*}/json"
time="2025-01-16T18:17:25.732227939Z" level=debug msg="Registering GET, /plugins/privileges"
time="2025-01-16T18:17:25.732253829Z" level=debug msg="Registering DELETE, /plugins/{name:.*}"
time="2025-01-16T18:17:25.732285340Z" level=debug msg="Registering POST, /plugins/{name:.*}/enable"
time="2025-01-16T18:17:25.732336711Z" level=debug msg="Registering POST, /plugins/{name:.*}/disable"
time="2025-01-16T18:17:25.732384912Z" level=debug msg="Registering POST, /plugins/pull"
time="2025-01-16T18:17:25.732438493Z" level=debug msg="Registering POST, /plugins/{name:.*}/push"
time="2025-01-16T18:17:25.732527335Z" level=debug msg="Registering POST, /plugins/{name:.*}/upgrade"
time="2025-01-16T18:17:25.732597687Z" level=debug msg="Registering POST, /plugins/{name:.*}/set"
time="2025-01-16T18:17:25.732672668Z" level=debug msg="Registering POST, /plugins/create"
time="2025-01-16T18:17:25.732720879Z" level=debug msg="Registering GET, /distribution/{name:.*}/json"
time="2025-01-16T18:17:25.732764790Z" level=debug msg="Registering POST, /grpc"
time="2025-01-16T18:17:25.732788440Z" level=debug msg="Registering GET, /networks"
time="2025-01-16T18:17:25.732817411Z" level=debug msg="Registering GET, /networks/"
time="2025-01-16T18:17:25.732848712Z" level=debug msg="Registering GET, /networks/{id:.+}"
time="2025-01-16T18:17:25.732881822Z" level=debug msg="Registering POST, /networks/create"
time="2025-01-16T18:17:25.732906763Z" level=debug msg="Registering POST, /networks/{id:.*}/connect"
time="2025-01-16T18:17:25.732954924Z" level=debug msg="Registering POST, /networks/{id:.*}/disconnect"
time="2025-01-16T18:17:25.733013215Z" level=debug msg="Registering POST, /networks/prune"
time="2025-01-16T18:17:25.733043286Z" level=debug msg="Registering DELETE, /networks/{id:.*}"
time="2025-01-16T18:17:25.733432034Z" level=info msg="API listen on /var/run/docker.sock"
time="2025-01-16T18:17:25.733673969Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:25.734175890Z" level=debug msg="Calling GET /v1.47/events?filters=%7B%22type%22%3A%7B%22container%22%3Atrue%7D%7D"
time="2025-01-16T18:17:25.734710621Z" level=debug msg="Calling GET /v1.47/containers/json"
time="2025-01-16T18:17:26.236090355Z" level=debug msg="Client context cancelled, stop sending events"
time="2025-01-16T18:17:26.246730330Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:26.372747397Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:26.373230598Z" level=debug msg="Calling POST /grpc"
time="2025-01-16T18:17:26.373804350Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:26.379631632Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:26.380131903Z" level=debug msg="Calling POST /grpc"
time="2025-01-16T18:17:26.380650534Z" level=debug msg="Calling HEAD /_ping"
time="2025-01-16T18:17:26.380894879Z" level=debug msg="Calling GET /v1.47/version"
time="2025-01-16T18:17:26.385759312Z" level=debug msg="Calling POST /grpc"
time="2025-01-16T18:17:26.388828017Z" level=debug msg="Calling POST /session"
unexpected fault address 0xc001a95ce9
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x2 addr=0xc001a95ce9 pc=0x2986024]

goroutine 111 gp=0xc0008a4fc0 m=17 mp=0xc000d80008 [running]:
runtime.throw({0x9000fb?, 0x0?})
/usr/local/go/src/runtime/panic.go:1023 +0x5c fp=0xc000743be8 sp=0xc000743bb8 pc=0x213b5dc
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:895 +0x285 fp=0xc000743c48 sp=0xc000743be8 pc=0x2154285
google.golang.org/grpc.(*ClientConn).resolveNow(0xc000d3a008, {})
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1081 +0x44 fp=0xc000743c70 sp=0xc000743c48 pc=0x2986024
google.golang.org/grpc.(*addrConn).createTransport.func1(0xdc?)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/clientconn.go:1362 +0x11f fp=0xc000743ce8 sp=0xc000743c70 pc=0x2987c9f
google.golang.org/grpc/internal/transport.(*http2Client).Close(0xc000c1a248, {0xd330a0, 0xc00058fc20})
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1001 +0x1a9 fp=0xc000743e78 sp=0xc000743ce8 pc=0x2938149
google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000c1a248, 0xc000c163c0)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1672 +0x845 fp=0xc000743fc0 sp=0xc000743e78 pc=0x293d525
google.golang.org/grpc/internal/transport.newHTTP2Client.gowrap4()
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:412 +0x25 fp=0xc000743fe0 sp=0xc000743fc0 pc=0x2932d05
runtime.goexit({})
/usr/local/go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000743fe8 sp=0xc000743fe0 pc=0x2176d41
created by google.golang.org/grpc/internal/transport.newHTTP2Client in goroutine 331
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/internal/transport/http2_client.go:412 +0x1e79

@thaJeztah (Member)

cc @dmcgowan

@Shippe commented Jan 22, 2025

Hello guys,

@heyvito (Author) commented Jan 23, 2025

Still trying to diagnose what is happening.

Created a bare-metal K8s cluster, compiled Bottlerocket with the same version and kernel, and dind was able to boot without issues.
In our production environment, we noticed the following behaviour:

  1. Starting containerd and then dockerd causes the same issue. Moreover, dockerd seems to delete containerd's socket file, even when containerd was started externally.
  2. Starting containerd using the configuration file generated by dockerd succeeds.
  3. Starting containerd using the configuration file generated by a previous dockerd, and starting dockerd providing --debug --containerd=/var/run/docker/containerd/containerd.sock allows dockerd to start and run as expected.

An interesting find was this log message, observed on cases 1 and 2:

time="2025-01-23T16:27:06.659422860Z" level=debug msg="created containerd monitoring client" address=/var/run/docker/containerd/containerd.sock module=libcontainerd
time="2025-01-23T16:27:06.659606751Z" level=debug msg="2025/01/23 16:27:06 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: \"/var/run/docker/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /var/run/docker/containerd/containerd.sock: connect: no such file or directory\"" library=grpc

I couldn't find anything relevant on strace.

Also, using the latest docker:dind image with 3 replicas resulted in two replicas running normally and one failing with the same error as described in the first post. We also observed a case where dockerd starts normally, but issuing any operation causes it to crash with the same failure. Finally, we managed to start dind without issues on ARM servers and were not able to reproduce the issue on that architecture.

@thaJeztah
Member

A couple of observations from the above (nothing conclusive, and just related to some things mentioned):

I see that a mount is configured for /var/run. One thing that's possibly relevant here is that /var/run often is a symlink to /run;

docker run -it --rm docker:27.5.0-dind ls -l /var/run
lrwxrwxrwx    1 root     root             6 Jan  8 11:05 /var/run -> ../run

Depending on how things are configured, tools may use /var/run or /run, which normally (due to the symlink) would be "equivalent":

docker run -it --rm docker:27.5.0-dind sh -c 'ls -l /var/run/ && ls -l /run/'
total 12
drwxr-xr-x    2 root     root          4096 Jan  8 11:05 lock
total 12
drwxr-xr-x    2 root     root          4096 Jan  8 11:05 lock

But a mount can result in the symlink being shadowed by the mount. With docker, the symlink is followed when creating the mount inside the container, but I don't know if that's always the case with other runtimes;

mkdir hello && touch hello/world.txt
docker run -it --rm -v ./hello:/var/run docker:27.5.0-dind ls -l /var/run
lrwxrwxrwx    1 root     root             6 Jan  8 11:05 /var/run -> ../run

docker run -it --rm -v ./hello:/var/run docker:27.5.0-dind sh -c 'ls -l /var/run/ && ls -la /run/'
total 4
-rw-r--r--    1 root     root             0 Jan 23 18:25 world.txt
total 4
-rw-r--r--    1 root     root             0 Jan 23 18:25 world.txt

An interesting find was this log message, observed on cases 1 and 2:

time="2025-01-23T16:27:06.659422860Z" level=debug msg="created containerd monitoring client" address=/var/run/docker/containerd/containerd.sock module=libcontainerd
time="2025-01-23T16:27:06.659606751Z" level=debug msg="2025/01/23 16:27:06 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: \"/var/run/docker/containerd/containerd.sock\", ServerName: \"localhost\", Attributes: {\"<%!p(networktype.keyType=grpc.internal.transport.networktype)>\": \"unix\" }, }. Err: connection error: desc = \"transport: Error while dialing: dial unix /var/run/docker/containerd/containerd.sock: connect: no such file or directory\"" library=grpc

When starting dockerd without specifying the --containerd=<path to containerd socket> option, it's possible to end up in a race condition. In that case the dockerd daemon tries to detect whether containerd is already running by checking if the containerd socket exists in the default location:

moby/cmd/dockerd/daemon.go

Lines 945 to 956 in 441579a

func systemContainerdRunning(honorXDG bool) (string, bool, error) {
	addr := containerddefaults.DefaultAddress
	if honorXDG {
		runtimeDir, err := homedir.GetRuntimeDir()
		if err != nil {
			return "", false, err
		}
		addr = filepath.Join(runtimeDir, "containerd", "containerd.sock")
	}
	_, err := os.Lstat(addr)
	return addr, err == nil, nil
}

The default location is taken from:

// DefaultAddress is the default unix socket address
DefaultAddress = "/run/containerd/containerd.sock"

When it fails to establish that containerd is running, it falls back to starting its own containerd daemon as a child process, using the configuration generated by dockerd:

moby/cmd/dockerd/daemon.go

Lines 1015 to 1027 in 441579a

func (cli *daemonCLI) initializeContainerd(ctx context.Context) (func(time.Duration) error, error) {
	systemContainerdAddr, ok, err := systemContainerdRunning(honorXDG)
	if err != nil {
		return nil, errors.Wrap(err, "could not determine whether the system containerd is running")
	}
	if ok {
		// detected a system containerd at the given address.
		cli.ContainerdAddr = systemContainerdAddr
		return nil, nil
	}
	log.G(ctx).Info("containerd not running, starting managed containerd")
	opts, err := cli.getContainerdDaemonOpts()
The `containerd is still running` error you posted in an earlier comment #49285 (comment) could be happening in a situation where:

  • dockerd failed to detect that containerd was running, which could be a race;
    • containerd was running, but the socket was not yet created
    • containerd was running, but the socket was in a different location (shadowed by a mount?)
  • therefore dockerd tried to start its own instance
    • but containerd refused to start because an instance was already running (a pidfile still present to guard against multiple instances being started)

@heyvito
Author

heyvito commented Jan 23, 2025

@thaJeztah Thanks for replying!

We could correlate Grafana Beyla being deployed on the cluster with the failure I reported. We could not, however, pinpoint the reason. After several tests, we can say for sure that it is correlated. I haven't had luck reproducing the issue on my bare-metal cluster (also running Bottlerocket), but it is easily reproducible on our EKS cluster. Beyla's logs showed no relevant information either.

I'll update this issue in case we discover anything new.
