[Stack Monitoring] Align integration packages ingest pipelines with filebeat #137112
Comments
Pinging @elastic/infra-monitoring-ui (Team:Infra Monitoring UI)
Interesting! I was under the assumption that the current packages used the filebeat modules under the hood, but if this is the case, I guess not. Looking forward to trying it out.
They do, but not the mappings. The metricbeat mappings are joined into one huge mega-mapping, while the integrations keep their own single-index mappings to keep them slim (also in line with the index naming schema). So: same documents, but different mappings, and maybe different ingest pipelines too?
I'm not completely sure, but I'm basing this ticket on the fact that the logs data streams include a definition of pipelines and mappings. It also makes sense because the logs are shipped to their individual data streams.
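For context, a rough sketch of where these assets live on each side; the paths are from memory and may not match the repositories exactly, so treat them as an approximation:

# filebeat module (beats repo) - one shared module, pipelines loaded by `filebeat setup`
filebeat/module/elasticsearch/server/
  ingest/pipeline.yml        # ingest pipeline(s) for the server fileset
  config/                    # input configuration templates

# integration package (integrations repo) - one data stream per log type
packages/elasticsearch/data_stream/server/
  elasticsearch/ingest_pipeline/default.yml   # pipeline installed with the package
  fields/                                     # mappings for logs-elasticsearch.server-*
  agent/stream/                               # agent input templates (what ends up in the config further down)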
I've been looking into this since yesterday, because it will make #120416 work.
Assuming you are wondering why we need to duplicate the ingest pipelines in the integration packages, the best way I can answer this (also based on this doc) is that with integration packages we don't enable modules in filebeat; the agent configures the inputs directly. An example of how the agent's filebeat config looks when elasticsearch package log ingestion is enabled:
sudo elastic-agent inspect output --output default --program filebeat -v
filebeat:
inputs:
- exclude_files:
- .gz$
- _slowlog.log$
- _access.log$
- _deprecation.log$
id: logfile-elasticsearch.server-66cf0158-333f-439c-a5fe-920a7e2c5d17
index: logs-elasticsearch.server-default
meta:
package:
name: elasticsearch
version: 0.4.2
multiline:
match: after
negate: true
pattern: ^(\[[0-9]{4}-[0-9]{2}-[0-9]{2}|{)
name: elasticsearch-1
paths:
- /tmp/es/8.5.0/logs/elasticsearch*.log
- /tmp/es/8.5.0/logs/*_server.json
processors:
- add_locale.when.not.regexp.message: ^{
- add_fields:
fields:
service.type: elasticsearch
target: message
- add_fields:
fields:
dataset: elasticsearch.server
namespace: default
type: logs
target: data_stream
- add_fields:
fields:
dataset: elasticsearch.server
target: event
- add_fields:
fields:
id: dfc83057-24a2-4b1c-b87d-ef1c79b1c25d
snapshot: false
version: 8.3.3
target: elastic_agent
- add_fields:
fields:
id: dfc83057-24a2-4b1c-b87d-ef1c79b1c25d
target: agent
revision: 20
type: log
output:
elasticsearch:
api_key: dXVRq4IBb3H3XlXTClBB:xsSRhP6ISeikWs3ACEC6HA
hosts:
- http://localhost:9200
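The add_fields processor targeting data_stream above is what routes these documents: the resulting data stream name is <type>-<dataset>-<namespace>, here logs-elasticsearch.server-default, which matches the index setting of the input. A quick, hypothetical way to inspect what the package installed for that data stream on a local cluster (the URLs assume the default localhost output shown above):

# Show the backing data stream and the index template Fleet installed for it
curl -s 'http://localhost:9200/_data_stream/logs-elasticsearch.server-default?pretty'
curl -s 'http://localhost:9200/_index_template/logs-elasticsearch.server?pretty'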
There are a few more "catches", because the agent's filebeat isn't using the modules config:
I've been mostly looking at the Cluster Overview UI, more specifically at the Logs tile. It filters logs by
So we should be able to copy the raw assets (mappings/pipelines) from the filebeat modules over to the package? I suppose the pipelines should work right away, but the mappings probably need more work because the log streams are specialized and separated per type of log.
Yes. We will need to adapt a few things for the pipelines, because some modules support both v7 and v8, and the packages will only support v8. For Elasticsearch we can even remove the v7 support. Mappings will need more work.
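To make that concrete, here is a minimal sketch of the shape a trimmed package pipeline could take once the plaintext/v7 branch is gone. This is not the real filebeat or package pipeline; the processors and field names are assumptions for illustration only:

description: Sketch only - parse v8 JSON server logs (not the shipped pipeline)
processors:
  # v8 server logs are always JSON, so the old plaintext/grok branch can be dropped
  - json:
      field: message
      target_field: elasticsearch.server
  # lift the log's own timestamp into @timestamp (source field name assumed)
  - date:
      field: elasticsearch.server.@timestamp
      target_field: '@timestamp'
      formats:
        - ISO8601
      ignore_failure: true
  - set:
      field: event.dataset
      value: elasticsearch.server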
Draft of ES logs pipeline: elastic/integrations#4033
We've checked all three tasks. Closing this ticket since there is nothing left to do. |
Summary
We should verify that the logs mappings and ingest pipelines defined in filebeat for the elasticsearch, kibana and logstash modules are aligned with the ones defined in the corresponding integration packages.
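One possible spot-check, assuming a cluster where filebeat has loaded its module pipelines (via filebeat setup) and the packages are installed; the pipeline IDs below follow the usual naming conventions, but the exact version suffixes are guesses:

# Filebeat names its pipelines filebeat-<version>-<module>-<fileset>-pipeline,
# Fleet names the package ones logs-<dataset>-<package version>
curl -s 'http://localhost:9200/_ingest/pipeline/filebeat-8.5.0-elasticsearch-server-pipeline?pretty' > filebeat-server.json
curl -s 'http://localhost:9200/_ingest/pipeline/logs-elasticsearch.server-0.4.2?pretty' > package-server.json
diff filebeat-server.json package-server.json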
AC
Related UI work #120416