Caddy 2.6.3 high memory usage #5366
Is it the build step that's running out of memory, or the Caddy server itself? Please share the steps to reproduce. If it's the build, share your build process. If it's Caddy itself, fill out the form below. Also, please share the output of

This template will ask for some information you've already provided; that's OK, just fill it out the best you can. 👍 I've also included some helpful tips below the template. Feel free to let me know if you have any questions! Thank you again for your report; we look forward to resolving it!
Count | Profile |
---|---|
268 | allocs |
0 | block |
0 | cmdline |
27 | goroutine |
268 | heap |
0 | mutex |
0 | profile |
13 | threadcreate |
0 | trace |
My hunch is that Caddy is buffering the request and/or response in the reverse-proxy handler due to the changes in #5289. Try disabling buffering by setting
I have changed the config from
to
and restarted Caddy. It doesn't seem to have any effect; the memory still fills up pretty quickly when the process is started again.
@GerritKopp What is your full, unredacted config? What is the output of

Can you please collect a heap profile? That is what we mean when we talk about that debug/pprof endpoint -- not just the counts next to the links :) For example, for a 30-second heap profile, add
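For reference, here is a minimal Go sketch of grabbing such a profile from the admin endpoint. It assumes the default admin address localhost:2019 (where Caddy exposes the standard pprof handlers) and uses the pprof handler's seconds parameter to request a 30-second delta profile; the output file name is illustrative.

```go
// Download a heap profile from Caddy's admin endpoint and save it to disk.
// Assumption: the admin API is listening on the default localhost:2019.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// "seconds=30" asks Go's pprof handler for a 30-second delta profile
	// rather than a single point-in-time snapshot.
	resp, err := http.Get("http://localhost:2019/debug/pprof/heap?seconds=30")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("heap.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote heap.pprof -- inspect it with: go tool pprof heap.pprof")
}
```

The resulting file can then be attached here or explored locally with `go tool pprof heap.pprof`.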
To reproduce the results faster, the server was configured with 16 GB RAM and no swap file. Caddy 2.6.2 had no issues with the same setup and only 8 GB. I tried to collect all the requested metrics. Since Caddy gets killed, the heap-delta collection had to stop earlier. I ran

The goroutine dump is from another run performing the same tasks, when the server was nearly maxed out on memory again.

All files in an archive: top-output-and-heap-profile.zip

Caddyfile (with redacted credentials):
I hope this information is useful.
Thanks! Yeah that is way more insight than before. Goroutine dump is extremely small -- looks like you're only serving a small handful of requests, but at least one of those requests is causing your server to spiral:
Something is buffering wildly. Can you please do another go at this, but:
What kinds of requests is this server getting? Are clients uploading or downloading large files? What are the headers of the requests like? If you could post the unredacted logs here, that would be extremely helpful 😊 Thank you!
The reverse proxy is buffering the body... Here's the relevant code, I think: caddy/modules/caddyhttp/reverseproxy/reverseproxy.go, lines 617 to 628 at 90798f3
These must be chunked requests...
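For context, a minimal sketch (plain net/http, not Caddy's code) of why chunked requests are dangerous to buffer: a request sent with Transfer-Encoding: chunked has no declared length, so Request.ContentLength is -1, and reading the whole body into memory allocates as much as the client chooses to send.

```go
// Illustration only: unbounded buffering of a chunked request body.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Chunked uploads (like large registry pushes) carry no Content-Length,
	// so the declared length is -1.
	fmt.Println("declared length:", r.ContentLength)

	// Buffering the entire body means memory use grows with the upload:
	// a multi-gigabyte image layer becomes multi-gigabyte allocations.
	var buf bytes.Buffer
	if _, err := io.Copy(&buf, r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Println("buffered", buf.Len(), "bytes in memory")

	// Streaming the body straight to the upstream instead keeps memory
	// flat no matter how large the request is.
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```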
Ok, the culprit is this PR: #5289. @u5surf Would you be able to help look into this? We will likely need to release a hotfix in v2.6.4 for this. (This is also my oversight in code review, as we should not be buffering chunked requests. The reason they are chunked is that they are large. They should not be buffered.)

Edit: Wow, I just noticed that @mohammed90 caught it hours before I did. Sorry that slipped by my scrolling. (The GitHub app is... a bit janky with comments. I'm on my desktop now.) Thank you, Mohammed!
Are more detailed logs still needed? The earliest I could look into the necessary configuration changes to collect them would be next Monday.
@GerritKopp Nope, sorry -- I should have let you know that I think we have enough info now for a hotfix. |
No problem. Looking forward to the fix. Thanks!
I have a proposed patch in #5367. @u5surf Would you be able to take a look? I still want to incorporate your fix, but maybe in an opt-in sort of way, or if we can find a better fix. Apparently not all chunked encodings hang! (See @GerritKopp's use case.) Super bonus points if you could test the patch to see if it works for you, @GerritKopp -- thank you 😊 We will release v2.6.4 right away once it's verified.
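As a general illustration of the direction such a fix can take (a sketch of the bounded-buffering pattern, not the actual change in #5367; the helper name and the 256 KiB cap are made up): hold at most a fixed number of bytes in memory and stream the remainder, so a large chunked upload can no longer consume unbounded memory.

```go
// Sketch: buffer at most maxBuf bytes of a body, then stream the rest.
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

const maxBuf = 256 << 10 // illustrative cap on in-memory buffering (256 KiB)

// partiallyBuffer reads at most maxBuf bytes from body into memory and
// returns a reader that replays that prefix and then streams whatever is
// left directly from body, plus the number of bytes actually buffered.
func partiallyBuffer(body io.Reader) (io.Reader, int64) {
	var buf bytes.Buffer
	n, _ := io.CopyN(&buf, body, maxBuf) // io.EOF here just means a short body
	return io.MultiReader(bytes.NewReader(buf.Bytes()), body), n
}

func main() {
	// Stand-in for a large chunked request body (1 MiB of data).
	body := strings.NewReader(strings.Repeat("x", 1<<20))

	r, buffered := partiallyBuffer(body)
	forwarded, _ := io.Copy(io.Discard, r) // a proxy would copy r upstream
	fmt.Printf("buffered %d bytes in memory, forwarded %d bytes total\n", buffered, forwarded)
}
```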
Unfortunately, I could not test this earlier. The issue is fixed for me in 2.6.4. 👍
Oh good :) Thanks! |
Thank you so much for fixing this, @mholt. I had a container go into crash loops due to resource utilization, and we suspected Caddy, but we weren't sure whether a memory leak in our UI could be causing the issue. I'm happy to share that with Caddy 2.6.4, all the issues went away 🚀
Great to hear! Thanks for the feedback. |
We are using Caddy on Ubuntu 20.04 on one of our servers as a reverse proxy for our Docker registry, and we regularly pull and push Docker images to and from this registry. The server initially had 8 GB of memory, and everything worked fine with Caddy 2.6.2.
After updating to Caddy 2.6.3, the server regularly runs out of memory when building and publishing new Docker images via our CI system, and Caddy gets killed as a result (see below). This still happens even after we increased the server memory to 16 GB and added 32 GB of swap.
A downgrade to 2.6.2 fixes the issue.
Did some configuration defaults change in the new version? Is there a way to limit the memory usage? Is there maybe a memory leak in 2.6.3?