
[alerts] add doc on the event log ILM policy #82435

Closed
pmuellr opened this issue Nov 3, 2020 · 2 comments · Fixed by #92736
Assignees
Labels
Feature:Alerting Feature:EventLog needs_docs Team:ResponseOps Label for the ResponseOps team (formerly the Cases and Alerting teams)

Comments

@pmuellr
Member

pmuellr commented Nov 3, 2020

A recent post to the community Slack channel asked about cleaning up the event log indices:

Hi everyone … we seem to be collecting Kibana event log indices in our ES cluster each time we upgrade. Right now doing a GET /_cat/indices shows we have one .kibana-event-log-7.9.3-* index, two .kibana-event-log-7.9.1-* indices and others for v7.9.0, 7.8.1 and 7.8.0.

On the one hand they’re all pretty small (around the 5kb mark) but it would be good to know if there’s some setting we’re missing to automagically tidy them up on upgrade or whether we can (safely) remove all but the indices named for the latest version?

My response:

The .kibana-event-log* indices are currently only being used by Kibana alerting as a historical log of the processing that alerting is doing in the background. There’s an ILM policy which controls when they roll over, are deleted, etc.

By default, the delete phase is set to 90 days from rollover, via the ILM policy associated with these indices. Feel free to change it. The policy is created if it doesn't exist, but it is never updated once it exists, so your changes won't be overwritten by Kibana.

You can edit the ILM policy from within Kibana, via
Stack Management -> Index Lifecycle Policies -> kibana-event-log-policy
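The same policy can also be inspected and updated through the Elasticsearch ILM APIs. A sketch, using the policy name from above (the exact default policy body shipped by Kibana may include a hot/rollover phase not shown here, so check the GET output first):

```
# View the current policy
GET _ilm/policy/kibana-event-log-policy

# Update it, e.g. to shorten retention to 30 days from rollover
PUT _ilm/policy/kibana-event-log-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Note that PUT replaces the entire policy, so the body should include all phases returned by the GET (for example the hot phase with its rollover settings), not just the delete phase you're changing.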

The data in these logs is used to generate some data in the alerting UIs, and can be used for diagnostic purposes, if you’re having problems with alerting.

I don’t think we have any doc on this, so I’ll open an issue to make that happen.

We should add some docs about this; I don't think we have any currently. If I had to pick a page to put it on, I'd put it here:

https://www.elastic.co/guide/en/kibana/current/alerting-scale-performance.html

I do worry a bit about discoverability. I'm not sure there's a doc page that lists all the resources Kibana creates for itself and whether users can modify them, but we could certainly add the event log indices, ILM policy, and index template to such a list. Only the ILM policy is designed to be edited by customers.
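For completeness, those three resources can be inspected from Dev Tools. A sketch; the index pattern and policy name come from this thread, while the index template name and whether it's a legacy or composable template vary by version, so the last request is an assumption:

```
# The event log indices, one (or more) per Kibana version
GET _cat/indices/.kibana-event-log-*?v

# The ILM policy -- the only resource intended to be customer-edited
GET _ilm/policy/kibana-event-log-policy

# The index template (name assumed; may be a legacy _template in older 7.x)
GET _index_template/.kibana-event-log*
```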

@pmuellr pmuellr added Feature:Alerting Team:ResponseOps Label for the ResponseOps team (formerly the Cases and Alerting teams) Feature:EventLog labels Nov 3, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-alerting-services (Team:Alerting Services)

This was referenced Nov 3, 2020
@pmuellr
Member Author

pmuellr commented Nov 3, 2020

... Not sure if there is a doc page that lists all the resources Kibana creates for itself, and if users can modify them, etc, but we could certainly add the event log indices, ILM policy, and index template to this list. ...

@kobelb indicated in our triage meeting that such a doc does not currently exist, so I think just doc'ing this where I suggested would be fine for now.

@pmuellr pmuellr self-assigned this Feb 24, 2021
pmuellr added a commit to pmuellr/kibana that referenced this issue Mar 1, 2021
resolves elastic#82435

I added this to the alert/actions setting doc, as that seemed like the best
place for now.  Just provided a brief description, name of the policy, mentioned
we create it but never modify it, provided the default values, and mentioned
it could be updated by customers for their environment.  Not sure we want to
provide more info than that.
pmuellr added a commit that referenced this issue Mar 25, 2021
pmuellr added a commit to pmuellr/kibana that referenced this issue Mar 25, 2021
pmuellr added a commit that referenced this issue Mar 25, 2021
@kobelb kobelb added the needs-team Issues missing a team label label Jan 31, 2022
@botelastic botelastic bot removed the needs-team Issues missing a team label label Jan 31, 2022