Fixes #20889 - Use event mpm module with Apache #753
Conversation
Two questions (which I am not sure myself about): …
As for the tuning, I think we could even make those the base tuning, if they turn out to be correct.
With the prefork mpm every worker took quite a bit of memory, especially when we had both mod_passenger and mod_wsgi loaded. Nowadays we only do reverse proxying and static file serving, so every worker is already very light. For example, using the prefork mpm on my machine (no tuning), Apache takes just 15.4 MB of RAM according to systemctl status httpd.
So I did start to read through https://www.liquidweb.com/kb/apache-performance-tuning-mpm-directives/ and it makes me wonder where the right balance is.
Thanks for the suggestions. I've pushed an update which preserves the mod_prefork tunings in case that mpm is used, switches to the event mpm only for EL8, and moves the deployment-size-invariant parameter values into common.yaml (including the existing prefork tunings, which did not vary).
Thanks, this makes me think that a good balance is reducing ThreadsPerChild by a factor of 2 or 4. Compared to what I have now, that would cause a 2-4x increase in the (small) memory footprint of workers while reducing the impact of worker recycling. In this case, the memory consumption should still be significantly smaller than with the prefork mpm.
We can also consider turning off default mods: it does mean we need to be careful to include the mods we do use, but fewer mods is better for memory usage and security.
So for example:
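A plausible sketch of such a configuration in hiera, inferred from the numbers in the following comment (16 processes of 16 threads at startup, up to 64 worker processes under load); the StartServers and ThreadsPerChild keys and all values here are reconstructed assumptions, not the original snippet:

```yaml
# Reconstructed illustration of an event mpm tuning, not the original snippet.
apache::mpm_module: 'event'
apache::mod::event::startservers: 16           # 16 child processes at startup
apache::mod::event::threadsperchild: 16        # 16 threads per child process
apache::mod::event::serverlimit: 64            # at most 64 child processes
apache::mod::event::maxrequestworkers: 1024    # 64 processes * 16 threads
apache::mod::event::maxrequestsperchild: 4000  # recycle workers to bound memory growth
```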
In this case we'd start up 16 processes, each with 16 threads, so using @ekohl's numbers this is a cost of around 250 MB of RAM, which increases the memory footprint at idle vs. prefork but allows handling some load spikes without immediately requiring additional workers to be created. Under full load (64 worker processes) the memory consumption should still be about 16x smaller than with the prefork mpm. TODO: …
I think 250 MB out of the box is quite large. It is a LOT more than we use today. Does StartServers need to be that high? I'd assume Apache can scale up servers quite quickly if needed. For reference, my Foreman server today has 4 GB, so Apache would take about 6% of the server's memory. I don't think this is really a sane default for our default profile.
It seems odd to me that we didn't previously do anything different with the Apache configuration for different tuning sizes, so IMO this is the proper solution. Work on disabling default modules is ongoing. As expected there are breakages, so I'm identifying which modules are required and re-enabling those. I will check what the memory footprint per worker is like when the minimal set of modules is enabled, and add reasonable sizing-based configurations once that figure is obtained.
I interpret the previous comments as: we had a configuration that worked across all tunings and was a sane default for small on up to large, without compromising the small tuning for the sake of the larger ones (small/medium being the more common deployments), and we should not over-correct, since that impacts our testing and development environments by potentially requiring more resources than necessary. I think you could follow up with different values for the larger tuning profiles as we get performance data to support that, and for the initial pass err on the side of values that target our default tuning profile to match what we had before.
Do you have a Redmine for tracking this?
That is indeed what I meant. I'd start conservative with tuning. If Apache can quickly scale up workers and/or threads if needed, I'd start with a low number by default. I'd also check to see how many backend connections we have available. If Puma (for Foreman) and gunicorn (for Pulp) only have a limited number, you only really need to add workers for the additional static files (which I think at the moment are very few, basically only …).
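For reference, the backend capacity mentioned here is itself driven by hiera tuning keys; a rough illustration of the comparison, assuming the installer's Puma parameters (the key names and values below are illustrative, not taken from this PR):

```yaml
# Illustrative only: if the Foreman backend can serve at most
# 4 workers * 16 threads = 64 concurrent requests, Apache's worker
# count mainly needs headroom beyond that for static file requests.
foreman::foreman_service_puma_workers: 4
foreman::foreman_service_puma_threads_max: 16
```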
KATELLO 4.4 (…). Key metrics: 9 processes, 9 threads, 14.2M memory.
KATELLO NIGHTLY (including prior …). Key metrics: 3 processes, 37 threads, 13.2M memory.
I've updated this with the intention of more closely matching the previous behavior. The thread count is still higher than before, but within the same order of magnitude, and as you can see the memory usage is actually lower at idle. The reason for the higher thread count is that mpm_event uses worker recycling to prevent memory from growing without bound; rather than get into a situation where … Note that …
@@ -1,4 +1,8 @@
 ---
+apache::mod::event::serverlimit: 64
+apache::mod::event::maxrequestworkers: 1024
+apache::mod::event::maxrequestsperchild: 4000
This seems like quite a jump from default to medium; what is your thinking there?
If the question is about ServerLimit and MaxRequestWorkers: this tuning profile already has apache::mod::prefork::serverlimit: 1024, which allows up to 1024 single-threaded httpd worker processes. For the event mpm case, ServerLimit * ThreadsPerChild = MaxRequestWorkers, and we have ThreadsPerChild = 16, so matching the number of threads we already have for mpm_prefork implies ServerLimit * 16 = 1024, so ServerLimit = 64.
If the question is about MaxRequestsPerChild: the default is 0, which disables worker recycling entirely. We previously added apache::mod::prefork::maxrequestsperchild: 4000 to protect against memory leaks, so matching this value with the event mpm makes sense because threads are able to share memory within each worker process and this is a per-worker setting.
I think it's a likely oversight that MaxRequestsPerChild was previously not set in the default profile -- if we are concerned about the possibility of memory leaks then we should protect against them in the default profile as well. If you agree, I'll push an update adding that to the default too.
I was also surprised to find that the same limits are already used in the medium through extra-extra-large deployment sizes, and was tempted to actually scale them up and down with the profile sizes, but opted instead to heed the previous advice about not changing too much at one time.
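If that update is pushed, it would presumably be a single-line addition along these lines (the exact target file for the default profile is an assumption, since it is not shown here):

```yaml
# Hypothetical follow-up: also enable worker recycling in the default
# tuning profile, matching the 4000 already used by the sized profiles.
apache::mod::event::maxrequestsperchild: 4000
```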
Appreciate the detailed explanation, that all makes sense. I wouldn't worry about updating prefork configuration - we made it this far and EL7 support is set to be removed really soon.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
FWIW, now that we don't run mod_passenger and mod_wsgi inside Apache, I'm less concerned about memory leaks, and we could see whether increasing MaxRequestsPerChild or even setting it to 0 would make sense.
But obviously, this doesn't belong in THIS PR ;-)
@evgeni you previously commented - mind having a look?
@@ -1,4 +1,8 @@
 ---
+# EL8 only due to issues with older Apache event mpm on EL7. When EL7
+# support is dropped, this should move to tuning/common.yaml
+apache::mpm_module: 'event'
Would setting apache::mpm_module: 'event' in config/foreman.hiera/common.yaml (or tuning/common.yaml, idk what's better) and then apache::mpm_module: 'prefork' in config/foreman.hiera/family/RedHat-7.yaml get us the same result?
The benefit would be that this would then also apply to Debian/Ubuntu installations.
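Concretely, the suggestion amounts to something like this (a sketch of the proposed hiera layout; which of the two common.yaml locations to use is left open above):

```yaml
---
# config/foreman.hiera/common.yaml (or tuning/common.yaml)
apache::mpm_module: 'event'
---
# config/foreman.hiera/family/RedHat-7.yaml
# Older httpd on EL7 has issues with the event mpm, so keep prefork there.
apache::mpm_module: 'prefork'
```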
I'm cool with postponing the Debian part to a later PR.
I like this idea as it also sets the future base state we want and makes EL7 the odd duck which will be dropped soon.
Great idea. I've added …
CI failure is unrelated, blame @evgeni! theforeman/puppet-foreman_proxy#748
[test foreman-installer]
Thanks a ton @wbclark!
Some considerations:
- There are issues with the older httpd on EL7 that make mpm_event perform worse than it should.
- This required updates to the tuning profiles, and interestingly the prefork values were the same across all tuning sizes. Those Apache directives have been updated here with more modern equivalents for mpm_event, but these values haven't yet been thoroughly benchmarked.