Cron job not finishing since latest upgrade #1369
Hi @hostingnuggets, Are you able to share the contents of /var/log/nsm/so-curator-closed-delete.log?
Commit to prevent multiple instances of the cron job:
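(The linked commit itself isn't shown above. As a general illustration, overlap protection for a cron job is usually done by taking a non-blocking exclusive lock at the top of the script, so a new run exits immediately if a previous run is still going. A minimal sketch with a hypothetical lock path, not necessarily what the actual commit does:

# Hypothetical illustration only -- the actual commit isn't shown above.
# Take a non-blocking exclusive lock; if another instance holds it,
# exit instead of piling up a new process.
exec 200>/var/lock/so-curator-closed-delete.lock
flock -n 200 || exit 0
/usr/sbin/so-curator-closed-delete
)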
I am seeing this as well, but only on systems upgraded from 14.0.4.
Brant Hale
On Nov 14, 2018, at 10:50 AM, Hosting Nuggets wrote:
Hello,
Since I upgraded SO two days ago with soup on my Ubuntu 16.04 LTS sensor, it starts a cron job at around 15:11 which never seems to finish and gets started again and again. The result is that the load on the server and the number of processes keep increasing to the point where I need to reboot it.
Is there anything wrong with the latest upgrade?
Here is an extract of the relevant part of a ps -ef showing these processes:
root 12734 940 0 15:11 ? 00:00:00 /usr/sbin/CRON -f
root 12738 12734 0 15:11 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 12765 1 0 12:05 ? 00:00:00 su - sguil -- /usr/bin/snort_agent.tcl -c /etc/nsm/sos1-ens15f1/snort_agent-1.conf
sguil 12768 12765 0 12:05 ? 00:00:00 tclsh /usr/bin/snort_agent.tcl -c /etc/nsm/sos1-ens15f1/snort_agent-1.conf
sguil 12769 12768 0 12:05 ? 00:00:00 tail -n 1 -f /nsm/sensor_data/sos1-ens15f1/snort-1.stats
root 12772 12738 0 15:11 ? 00:00:03 /bin/bash /usr/sbin/so-curator-closed-delete
root 13310 940 0 14:13 ? 00:00:00 /usr/sbin/CRON -f
root 13315 13310 0 14:13 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 13345 13315 0 14:13 ? 00:00:33 /bin/bash /usr/sbin/so-curator-closed-delete
root 13401 940 0 15:17 ? 00:00:00 /usr/sbin/CRON -f
root 13416 13401 0 15:17 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 13468 13416 0 15:17 ? 00:00:02 /bin/bash /usr/sbin/so-curator-closed-delete
root 14561 940 0 15:05 ? 00:00:00 /usr/sbin/CRON -f
root 14580 14561 0 15:05 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 14651 14580 0 15:05 ? 00:00:04 /bin/bash /usr/sbin/so-curator-closed-delete
root 15145 940 0 15:33 ? 00:00:00 /usr/sbin/CRON -f
root 15148 15145 0 15:33 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 15180 15148 0 15:33 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 15227 2 0 15:32 ? 00:00:00 [kworker/u32:4]
root 15252 940 0 15:23 ? 00:00:00 /usr/sbin/CRON -f
root 15258 15252 0 15:23 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 15286 15258 0 15:23 ? 00:00:01 /bin/bash /usr/sbin/so-curator-closed-delete
root 15582 940 0 15:28 ? 00:00:00 /usr/sbin/CRON -f
root 15606 15582 0 15:28 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 15641 15606 0 15:28 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 17132 940 0 14:22 ? 00:00:00 /usr/sbin/CRON -f
root 17136 17132 0 14:22 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 17181 17136 0 14:22 ? 00:00:25 /bin/bash /usr/sbin/so-curator-closed-delete
root 17904 940 0 15:12 ? 00:00:00 /usr/sbin/CRON -f
root 17909 17904 0 15:12 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 17939 17909 0 15:12 ? 00:00:03 /bin/bash /usr/sbin/so-curator-closed-delete
root 18632 940 0 15:18 ? 00:00:00 /usr/sbin/CRON -f
root 18635 18632 0 15:18 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 18678 18635 0 15:18 ? 00:00:02 /bin/bash /usr/sbin/so-curator-closed-delete
root 19733 940 0 15:06 ? 00:00:00 /usr/sbin/CRON -f
root 19762 19733 0 15:06 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 19799 19762 0 15:06 ? 00:00:04 /bin/bash /usr/sbin/so-curator-closed-delete
root 19808 940 0 14:17 ? 00:00:00 /usr/sbin/CRON -f
root 19812 19808 0 14:17 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 19855 19812 0 14:17 ? 00:00:29 /bin/bash /usr/sbin/so-curator-closed-delete
root 19924 940 0 14:16 ? 00:00:00 /usr/sbin/CRON -f
root 19932 19924 0 14:16 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 19976 19932 0 14:16 ? 00:00:30 /bin/bash /usr/sbin/so-curator-closed-delete
root 20823 940 0 15:34 ? 00:00:00 /usr/sbin/CRON -f
root 20826 20823 0 15:34 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 20911 20826 0 15:34 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 21374 940 0 15:24 ? 00:00:00 /usr/sbin/CRON -f
root 21381 21374 0 15:24 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 21450 21381 0 15:24 ? 00:00:01 /bin/bash /usr/sbin/so-curator-closed-delete
root 22591 2 0 15:25 ? 00:00:00 [kworker/9:2]
root 22598 2 0 15:25 ? 00:00:00 [kworker/2:3]
root 22932 940 0 15:29 ? 00:00:00 /usr/sbin/CRON -f
root 22936 22932 0 15:29 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 22974 22936 0 15:29 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 22985 940 0 15:13 ? 00:00:00 /usr/sbin/CRON -f
root 22989 22985 0 15:13 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 23035 22989 0 15:13 ? 00:00:02 /bin/bash /usr/sbin/so-curator-closed-delete
root 23185 940 0 14:18 ? 00:00:00 /usr/sbin/CRON -f
root 23195 23185 0 14:18 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 23247 23195 0 14:18 ? 00:00:28 /bin/bash /usr/sbin/so-curator-closed-delete
root 23287 2 0 15:25 ? 00:00:00 [kworker/6:0]
root 23305 2 0 15:25 ? 00:00:00 [kworker/8:1]
root 23324 2 0 15:25 ? 00:00:00 [kworker/5:2]
root 23330 2 0 15:25 ? 00:00:00 [kworker/0:1]
root 23609 2 0 15:25 ? 00:00:00 [kworker/7:1]
root 23709 940 0 15:07 ? 00:00:00 /usr/sbin/CRON -f
root 23727 23709 0 15:07 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 23771 23727 0 15:07 ? 00:00:04 /bin/bash /usr/sbin/so-curator-closed-delete
root 23856 940 0 14:23 ? 00:00:00 /usr/sbin/CRON -f
root 23860 23856 0 14:23 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 23893 23860 0 14:23 ? 00:00:24 /bin/bash /usr/sbin/so-curator-closed-delete
root 23943 940 0 14:15 ? 00:00:00 /usr/sbin/CRON -f
root 23948 23943 0 14:15 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 23989 23948 0 14:15 ? 00:00:31 /bin/bash /usr/sbin/so-curator-closed-delete
root 24244 940 0 15:19 ? 00:00:00 /usr/sbin/CRON -f
root 24260 24244 0 15:19 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 24307 24260 0 15:19 ? 00:00:01 /bin/bash /usr/sbin/so-curator-closed-delete
root 24400 2 0 15:35 ? 00:00:00 [kworker/4:0]
root 24434 2 0 15:35 ? 00:00:00 [kworker/1:1]
root 24481 2 0 15:35 ? 00:00:00 [kworker/9:1]
root 24508 2 0 15:25 ? 00:00:00 [kworker/3:0]
root 24643 2 0 15:30 ? 00:00:00 [kworker/1:2]
root 24736 2 0 15:35 ? 00:00:00 [kworker/8:2]
root 24767 2 0 15:35 ? 00:00:00 [kworker/4:2]
root 24919 2 0 15:20 ? 00:00:00 [kworker/7:0]
root 25126 2 0 15:35 ? 00:00:00 [kworker/8:3]
root 25439 2 0 15:35 ? 00:00:00 [kworker/0:0]
root 25552 2 0 15:20 ? 00:00:00 [kworker/1:0]
root 25594 2 0 15:20 ? 00:00:00 [kworker/3:2]
root 25682 2 0 15:35 ? 00:00:00 [kworker/3:1]
root 25697 2 0 15:30 ? 00:00:00 [kworker/15:3]
root 25834 2 0 15:20 ? 00:00:00 [kworker/10:3]
root 26072 2 0 15:35 ? 00:00:00 [kworker/10:0]
root 26094 2 0 15:20 ? 00:00:00 [kworker/0:2]
root 26098 2 0 15:35 ? 00:00:00 [kworker/6:1]
root 26114 2 0 15:30 ? 00:00:00 [kworker/10:2]
root 26121 2 0 15:35 ? 00:00:00 [kworker/11:0]
root 26139 2 0 15:20 ? 00:00:00 [kworker/12:1]
root 26140 2 0 15:20 ? 00:00:00 [kworker/13:1]
root 26355 2 0 11:36 ? 00:00:08 [kworker/u32:0]
root 26385 2 0 15:30 ? 00:00:00 [kworker/14:2]
root 26401 2 0 15:35 ? 00:00:00 [kworker/11:3]
root 26423 2 0 15:30 ? 00:00:00 [kworker/13:3]
root 26439 2 0 15:30 ? 00:00:00 [kworker/5:0]
root 26781 2 0 15:20 ? 00:00:00 [kworker/12:2]
root 26830 2 0 15:20 ? 00:00:00 [kworker/2:1]
root 26894 940 0 14:11 ? 00:00:00 /usr/sbin/CRON -f
root 26898 26894 0 14:11 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 26931 26898 0 14:11 ? 00:00:35 /bin/bash /usr/sbin/so-curator-closed-delete
root 27574 940 0 15:25 ? 00:00:00 /usr/sbin/CRON -f
root 27580 27574 0 15:25 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 27642 27580 0 15:25 ? 00:00:01 /bin/bash /usr/sbin/so-curator-closed-delete
root 27767 940 0 15:08 ? 00:00:00 /usr/sbin/CRON -f
root 27781 27767 0 15:08 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 27814 27781 0 15:08 ? 00:00:04 /bin/bash /usr/sbin/so-curator-closed-delete
root 28138 2 0 15:15 ? 00:00:00 [kworker/4:3]
root 28234 940 0 15:14 ? 00:00:00 /usr/sbin/CRON -f
root 28241 28234 0 15:14 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 28302 28241 0 15:14 ? 00:00:02 /bin/bash /usr/sbin/so-curator-closed-delete
root 28431 940 0 15:35 ? 00:00:00 /usr/sbin/CRON -f
root 28440 28431 0 15:35 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 28487 28440 0 15:35 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 28940 940 0 14:19 ? 00:00:00 /usr/sbin/CRON -f
root 28943 28940 0 14:19 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 29023 28943 0 14:19 ? 00:00:27 /bin/bash /usr/sbin/so-curator-closed-delete
root 29059 2 0 15:15 ? 00:00:00 [kworker/9:0]
root 29944 940 0 15:30 ? 00:00:00 /usr/sbin/CRON -f
root 29948 29944 0 15:30 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 29989 29948 0 15:30 ? 00:00:00 /bin/bash /usr/sbin/so-curator-closed-delete
root 30365 940 0 15:20 ? 00:00:00 /usr/sbin/CRON -f
root 30393 30365 0 15:20 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 30448 30393 0 15:20 ? 00:00:01 /bin/bash /usr/sbin/so-curator-closed-delete
root 31817 940 0 14:14 ? 00:00:00 /usr/sbin/CRON -f
root 31819 31817 0 14:14 ? 00:00:00 /bin/sh -c /usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /de
root 31857 31819 0 14:14 ? 00:00:32 /bin/bash /usr/sbin/so-curator-closed-delete
root 32560 2 0 14:28 ? 00:00:03 [kworker/u32:2]
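(For a quick count of how many stuck so-curator-closed-delete instances have piled up in a listing like the one above, a one-liner such as this works:

ps -ef | grep '[s]o-curator-closed-delete' | wc -l

The [s] in the pattern keeps grep from counting its own process.)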
Hi @branthale, Are you able to share the contents of /var/log/nsm/so-curator-closed-delete.log?
We have a new securityonion-elastic package currently in testing that contains the commit above to prevent multiple instances of the cron job:
https://groups.google.com/d/topic/security-onion-testing/JRTmfoycSkQ/discussion
In the meantime, if you need to disable that particular script, you can remove the call to so-curator-closed-delete in /etc/cron.d/curator-close. For example, the last line in /etc/cron.d/curator-close currently looks like this:
/usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /dev/null 2>&1
So you would change it to look like this:
/usr/sbin/so-elastic-configure-curator-close > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /dev/null 2>&1
You may then want to reboot. I'm really curious to know if there is some other underlying issue here (in addition to the multiple instances piling up), so if anybody is able to share the contents of /var/log/nsm/so-curator-closed-delete.log, it would be much appreciated!
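(One way to make that edit in place, assuming the line matches exactly — a sketch; check your file first and keep a backup outside /etc/cron.d so cron doesn't pick it up:

sudo cp /etc/cron.d/curator-close /root/curator-close.bak
sudo sed -i 's|/usr/sbin/so-curator-closed-delete > /dev/null 2>&1; ||' /etc/cron.d/curator-close
)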
This log file is massive on my heavy node - 266,180,310 bytes. Twenty or so lines are being added every second. I too have multiple (1444) curator processes running, which would explain the heavier CPU/disk usage since I ran soup yesterday. This was NOT a system upgraded from 14.0.4.
Hi @VeryBaddude, You can run the command shown above to disable the problematic script. You might also need to reboot for good measure. Are you able to provide some of those 20 or so lines from the log so that we can see what is being logged?
I've run sudo soup and the server rebooted. Here's a sample of what was being written to the log file:
Wed Nov 14 22:23:33 UTC 2018 - 339 GB used...exceeds LOG_SIZE_LIMIT (325 GB) - Index deleted
...
There are no extra curator processes running right now and load average is now back to normal. Thanks
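(Those repeating lines suggest a loop of the following shape: re-check disk usage, delete an index, repeat until usage drops below LOG_SIZE_LIMIT. If usage never actually drops — for example, because the space is held by open indices and only closed indices are eligible for deletion — the loop never exits, which would match both the endless log and the never-finishing cron job. A hypothetical simplification of that pattern, my sketch and NOT the actual so-curator-closed-delete script:

# Hypothetical simplification of a LOG_SIZE_LIMIT check-and-delete loop.
LIMIT_GB=325
while true; do
  used_gb=$(df -BG --output=used /nsm | tail -1 | tr -d ' G')
  [ "$used_gb" -le "$LIMIT_GB" ] && break
  # Only *closed* indices are deletion candidates; if none exist,
  # nothing shrinks and this loop logs a line on every iteration.
  oldest=$(curl -s 'localhost:9200/_cat/indices?h=status,index' | awk '$1 == "close" {print $2}' | sort | head -1)
  [ -z "$oldest" ] && { sleep 1; continue; }
  echo "$(date) - ${used_gb} GB used...exceeds LOG_SIZE_LIMIT (${LIMIT_GB} GB) - Index deleted"
  curl -s -XDELETE "localhost:9200/${oldest}" > /dev/null
done
)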
Thanks @VeryBaddude! Were all of the lines in that log like that? Were there any errors or anything else interesting?
Sorry, no errors that I found when taking a quick look, just the same-looking lines being added continuously. I'll have to do a more thorough search tomorrow to be sure.
Hi @dougburks
@hostingnuggets @VeryBaddude @branthale, would any of you be able to provide the output of the following?
curl -s localhost:9200/_cat/indices
Thanks,
Wes
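(For reference when reading the output below: in Elasticsearch of this era, the default _cat/indices columns should be

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

which is how the entries in the following list line up.)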
I've just been manually deleting indices so I'm not sure you'll see anything wrong with my list:
green open logstash-bro-2018.11.10 5zzxwi7nQAuCYL1LpHW40A 6 0 21948486 0 32.3gb 32.3gb
green open logstash-ids-2018.11.06 72H6yszAR0eFfsPeDY9eSQ 6 0 3 0 69.3kb 69.3kb
green open logstash-syslog-2018.11.12 n1gVptvPT4-7fiKS2him3g 6 0 13180 0 3.2mb 3.2mb
green open logstash-syslog-2018.11.10 QUipNbOJRV-N2Igzdoz_YA 6 0 31091 0 7.1mb 7.1mb
green open logstash-ids-2018.11.11 qFmDLS3yTe-wVKgPAWB-OA 6 0 3 0 76.5kb 76.5kb
green open logstash-beats-2018.11.09 SPi5a7aWRuCj4Nc3sV5wag 1 0 26100 0 27.6mb 27.6mb
green open logstash-beats-2018.11.13 CgkTkpSNSNOe171tfPGv0g 1 0 17510 0 17.7mb 17.7mb
green open logstash-beats-2018.11.10 H-2CoVsBStWGExPgo19Niw 1 0 17564 0 17.2mb 17.2mb
green open logstash-bro-2018.11.15 y5gIRmghQ4Out8Hyl57MiA 6 0 14418028 0 26gb 26gb
green open logstash-bro-2018.11.14 Ha9-NYrdRJWhQ5Ypyg6xuQ 6 0 33191429 0 48.8gb 48.8gb
green open logstash-ids-2018.11.07 pRpHzk9WShGS491iRlBQDA 6 0 4 0 109.8kb 109.8kb
green open logstash-bro-2018.11.11 JmzthhD4QQWA0m28_4PXyQ 6 0 22273085 0 32.5gb 32.5gb
green open logstash-syslog-2018.11.11 ePDYkdWeQ3W3N-CXf7NfHA 6 0 30742 0 6.8mb 6.8mb
green open logstash-bro-2018.11.13 R3ATn2s2S_a5kYvLKxsnEw 6 0 31480432 0 45.9gb 45.9gb
green open logstash-ids-2018.11.13 qyplBodeS6qYHkhZ0kUqAA 6 0 1 0 21.7kb 21.7kb
green open logstash-bro-2018.11.09 BzHCIJmpRL6IwZEu43DwsQ 6 0 27920458 0 42.7gb 42.7gb
green open logstash-beats-2018.11.12 bYP4b2k6QGaA-waZEJOnSA 1 0 15093 0 16.6mb 16.6mb
green open logstash-syslog-2018.11.09 P1eZ53_XSB6aLZdfv9zo7g 6 0 38110 0 9.7mb 9.7mb
green open logstash-syslog-2018.11.13 Ibpnsjy2TyeIvgPSc1cYiQ 6 0 36178 0 9.4mb 9.4mb
green open logstash-syslog-2018.11.15 DfTr6dhiSHW_wyeP-9WjVQ 6 0 19083 0 5.5mb 5.5mb
green open logstash-ids-2018.11.14 XPzjpVB_Q6ygq_FeEAFoVg 6 0 8 0 123.7kb 123.7kb
green open logstash-beats-2018.11.14 s9tLrbtKR2CiZN7nnRt8uA 1 0 8175 0 7.2mb 7.2mb
green open logstash-beats-2018.11.11 Mj36s-BFQ-GrKbTCaOYurw 1 0 17217 0 16.8mb 16.8mb
green open logstash-bro-2018.11.12 WuB4FG4gRIaNgHVD_r6hWw 6 0 9607329 0 14gb 14gb
green open logstash-beats-2018.11.15 5E9wRwqbSbe3qYFK8fWPKg 1 0 5891 0 10.9mb 10.9mb
green open logstash-syslog-2018.11.14 keFE2Pd6QNO_l_HlXMVMoQ 6 0 39775 0 10.5mb 10.5mb
Hi Francois, Those all look to be open indices. If you look in ... Thanks,
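(If so-curator-closed-delete targets closed indices, as its name and the "open indices" remark suggest, a quick way to confirm whether any closed indices exist is a sketch like this, using the same _cat API as above:

curl -s 'localhost:9200/_cat/indices?h=status,index' | awk '$1 == "close"'

If that prints nothing, every index is open and there is nothing for the delete script to remove.)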
Submitted for testing: