Autoscale number of modelindexers to increase throughput and ensure full resource usage #9181
After merging #9318, we have made considerable improvements. (The different distributions shown are for APM Servers with 1, 2, 4, 8, 15, and 30 GB of RAM, in that order.) Looking at the APM Server CPU usage metrics, it also looks like, while we use more CPU when it is available (after the change to a dedicated-goroutine active indexer), we still aren't taking advantage of bigger instances with more CPUs. Looking at these metrics, it may be that scaling up the active indexers up to …
I think this is a good place to start for autoscaling while keeping it simple. Afterwards, we could use other metrics to fine-tune how autoscaling behaves:
To be tested as part of #9182.
From @marclop's findings:
Autoscale the number of modelindexers up and down depending on ES and apm agent load.
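One way to sketch that up-and-down behavior is a simple bounded controller driven by a load signal, such as how full the indexing queue is. Everything here is a hypothetical illustration: the function, the 0.8/0.2 thresholds, and the single-step adjustment are assumptions, not the actual autoscaling logic.

```go
package main

import "fmt"

// desiredIndexers is a hypothetical controller sketch (not apm-server code):
// it nudges the active indexer count up when the indexing queue is under
// pressure and down when it is mostly idle, staying within [min, max].
func desiredIndexers(current, min, max int, queueFill float64) int {
	switch {
	case queueFill > 0.8 && current < max:
		return current + 1 // queue backing up: add an active indexer
	case queueFill < 0.2 && current > min:
		return current - 1 // mostly idle: release an active indexer
	default:
		return current // load is moderate, or we are at a bound
	}
}

func main() {
	n := 1
	// Simulated queue-fullness samples over time.
	for _, fill := range []float64{0.9, 0.95, 0.5, 0.1, 0.1} {
		n = desiredIndexers(n, 1, 8, fill)
		fmt.Printf("fill=%.2f -> %d indexers\n", fill, n)
	}
}
```

The single-step adjustment keeps the controller conservative: a brief spike in ES or agent load adds at most one indexer per evaluation, so the count cannot oscillate wildly.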