What version of the package are you using?
v0.19.2

What are you trying to do?
Start Caddy without any TLS, but with multiple HTTP servers. We have a boilerplate default config (below) and then modify the config at runtime through the admin API.
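The boilerplate config is roughly along these lines (a minimal sketch, not our exact config; the server names, ports, and routes are illustrative):

```json
{
  "admin": { "listen": "localhost:2019" },
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":8080"],
          "automatic_https": { "disable": true },
          "routes": [
            { "handle": [{ "handler": "static_response", "body": "srv0" }] }
          ]
        },
        "srv1": {
          "listen": [":8081"],
          "automatic_https": { "disable": true },
          "routes": [
            { "handle": [{ "handler": "static_response", "body": "srv1" }] }
          ]
        }
      }
    }
  }
}
```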
What steps did you take?
Started Caddy with the default config above. I believe we didn't try to reconfigure Caddy until after it started logging the panic below, but we do make a small configuration change through the admin API as soon as the server starts, which is probably what triggered this.
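For reference, that runtime change is an ordinary admin API request, something like this sketch (the config path and payload are made up for illustration, not our real ones):

```go
// Replace one server's listen addresses via Caddy's admin API.
// PATCH on /config/<path> replaces the existing value at that path.
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`[":9090"]`)
	req, err := http.NewRequest(http.MethodPatch,
		"http://localhost:2019/config/apps/http/servers/srv0/listen", body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("admin API responded with", resp.Status)
}
```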
What did you expect to happen, and what actually happened instead?
I expected it to start up correctly.
Instead, there was a panic on startup caused by a timer being started with a non-positive interval:
{"level":"info","ts":1739472209.4884524,"msg":"no autosave file exists","autosave_file":"/opt/caddy/autosave.json"}
{"level":"info","ts":1739472209.4886894,"msg":"using provided configuration","config_file":"/opt/caddy/caddy.json","config_adapter":""}
{"level":"error","ts":1739472209.526402,"logger":"tls.cache.maintenance","msg":"panic","cache":"0xc000478180","error":"non-positive interval for NewTicker","stack":"goroutine 12 [running]:\ngithub.com/caddyserver/certmagic.(*Cache).maintainAssets.func1()\n\tgithub.com/caddyserver/[email protected]/maintain.go:48 +0x85\npanic({0x16b4c80?, 0x1e6e450?})\n\truntime/panic.go:914 +0x21f\ntime.NewTicker(0xc0002b4070?)\n\ttime/tick.go:22 +0xe5\ngithub.com/caddyserver/certmagic.(*Cache).maintainAssets(0xc000478180, 0x0)\n\tgithub.com/caddyserver/[email protected]/maintain.go:57 +0x207\ncreated by github.com/caddyserver/certmagic.NewCache in goroutine 1\n\tgithub.com/caddyserver/[email protected]/cache.go:127 +0x1f6\n"}
... repeats several times ...
This doesn't happen every time, since it's a race between (*Cache).SetOptions being called and the maintainAssets goroutine being scheduled.
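To make the failure mode concrete (this is not CertMagic code, just the stdlib behaviour the stack trace ends in): time.NewTicker panics as soon as it is handed a non-positive duration, so a maintenance goroutine that reads a zeroed interval and builds a ticker from it blows up exactly like the log above.

```go
package main

import "time"

func main() {
	// The zero value stands in for an interval option that was never set
	// (or was overwritten with 0 before the maintenance goroutine ran).
	var interval time.Duration
	_ = time.NewTicker(interval) // panics: "non-positive interval for NewTicker"
}
```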
How do you think this should be fixed?
(*Cache).SetOptions should do the same thing NewCache does: check for non-positive intervals and replace them with the defaults.
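Roughly the kind of guard I mean, sketched outside the real code base (the option struct and default values here are stand-ins, not CertMagic's actual identifiers):

```go
package sketch

import "time"

// CacheOptions is a minimal stand-in for the relevant cache options,
// not CertMagic's real type.
type CacheOptions struct {
	RenewCheckInterval time.Duration
	OCSPCheckInterval  time.Duration
}

// normalizeIntervals shows the guard I'd like SetOptions to apply, mirroring
// what NewCache already does: replace non-positive intervals with defaults so
// time.NewTicker in the maintenance goroutine can never see a zero duration.
// The default values below are placeholders, not CertMagic's actual defaults.
func normalizeIntervals(opts *CacheOptions) {
	const (
		defaultRenewCheckInterval = 10 * time.Minute
		defaultOCSPCheckInterval  = time.Hour
	)
	if opts.RenewCheckInterval <= 0 {
		opts.RenewCheckInterval = defaultRenewCheckInterval
	}
	if opts.OCSPCheckInterval <= 0 {
		opts.OCSPCheckInterval = defaultOCSPCheckInterval
	}
}
```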
Please link to any related issues, pull requests, and/or discussion
It seems like the TLS module in Caddy calls SetOptions, which I believe is the only possible place that could have caused this. If SetOptions is called before maintainAssets is scheduled, it could set the intervals to 0. I haven't figured out why the TLS app is being provisioned, but I believe it has something to do with the http app always having a default TLS app.

Bonus: What do you use CertMagic for, and do you find it useful?