On an AWS `m1.small` instance, the current `max_mem` calculation yields `-Xmx1105m`. That isn't so bad on an `m1.small` (1.7GB memory) instance until I run `sudo service elasticsearch status -v` and see the reported memory usage already pushing Monit's 90% threshold.

I know that your setup is perfect for `m1.large`, but I would still like to see a more universal equation than the current one ... so I'll also try and strain my brain to come up with something that factors in permgen etc. before coming up with a `max_mem` number to use. Without such a consideration, Monit hits the `restart if mem_usage > 90% for 15 cycles` condition very easily on non-`m1.large` instances like mine.
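Something along these lines is the shape of equation I have in mind. To be clear, this is only a sketch to illustrate the idea; the function name and the 64 MB permgen / 256 MB OS reserve / 60% heap-fraction numbers are placeholder assumptions of mine, not the module's current logic or a tested recommendation:

```python
# Sketch of a more "universal" max_mem calculation: reserve fixed headroom
# for permgen and the OS first, then size the heap from what remains,
# instead of taking a flat fraction of total RAM.
def suggest_heap_mb(total_mb, permgen_mb=64, os_reserve_mb=256, heap_fraction=0.6):
    """Return a candidate -Xmx value in MB that leaves non-heap headroom."""
    usable_mb = max(total_mb - permgen_mb - os_reserve_mb, 0)
    return int(usable_mb * heap_fraction)

print(suggest_heap_mb(1700))   # m1.small (~1.7GB RAM) -> ~828 MB instead of 1105 MB
print(suggest_heap_mb(7500))   # m1.large (~7.5GB RAM) -> ~4308 MB
```

With that kind of headroom, heap + permgen + native overhead should stay clear of the `mem_usage > 90%` line even on the smaller instance types.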
Upon closer inspection of `elasticsearch.init.erb`, I realized that `Memory Total` is the machine's total memory and not the total memory being consumed by ES. But the `top` command really does show 92.6% consumption ... and `heap_max_in_bytes` + `non_heap_max_in_bytes` don't add up to more than 80% of the available memory.
Sigh ... sorry for bugging you, I'm closing this until I have a proper suggestion/analysis to present. Too bad that I couldn't get JConsole running on the AWS instance yet.
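In the meantime, a rough way to double-check those numbers without JConsole is to read the JVM limits out of the node info API and compare them to the machine's memory. This is only a sketch under a couple of assumptions of mine: ES answering on `localhost:9200`, a version that reports `jvm.mem.heap_max_in_bytes` / `non_heap_max_in_bytes` under `GET /_nodes`, and a Linux box with `/proc/meminfo`:

```python
# Compare the JVM's advertised heap + non-heap limits against total RAM,
# as a JConsole-free sanity check on an EC2 instance.
import json
from urllib.request import urlopen

def memtotal_bytes():
    """Read MemTotal from /proc/meminfo (Linux only); the value is in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")

nodes = json.load(urlopen("http://localhost:9200/_nodes"))["nodes"]
total = memtotal_bytes()
for node_id, info in nodes.items():
    mem = info["jvm"]["mem"]
    jvm_max = mem["heap_max_in_bytes"] + mem["non_heap_max_in_bytes"]
    print("%s: heap+non-heap max = %d MB (%.1f%% of %d MB RAM)"
          % (node_id, jvm_max // 2**20, 100.0 * jvm_max / total, total // 2**20))
```

If that sum really stays under ~80% while `top` shows 92.6% resident, the difference is presumably direct buffers, thread stacks and other native allocations that sit outside both heap and permgen.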