beats (lumberjack) input plugin for filebeat #10890
+1
It's our understanding that the lumberjack protocol has facilities for backing off under load that the normal TCP input/output doesn't have. We'd love to replace Logstash with a beat that has a lumberjack/beats input and preferably a lumberjack/beats output so that it can act as an aggregator.
We are hoping to have this so we can front Elasticsearch with a Filebeat collector running a beats input, so that we don't have to expose Elasticsearch directly and don't have to distribute credentials to beats clients. This would help us replace Logstash entirely and simply use a single beat + Elasticsearch.
@strawgate I think we should do it. Also, another way you can do Beats -> Elasticsearch is to use Elasticsearch API keys instead of username/password, and the key can be created via the API as part of the deployment. The support is fairly new in Beats and will be released in 7.6; see https://www.elastic.co/guide/en/beats/filebeat/7.x/beats-api-keys.html
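For reference, a minimal sketch of the API-key approach in filebeat.yml; the host and key values below are placeholders, not taken from this thread:

```yaml
# filebeat.yml (sketch) -- authenticate the Elasticsearch output with an API key
# instead of distributing a username/password to every Beats host.
output.elasticsearch:
  hosts: ["https://es.example.internal:9200"]  # placeholder endpoint
  api_key: "id:api_key"                        # the id:key pair returned by the ES create-API-key API
```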
+1. Our rationale for this: proxying Filebeat traffic unmodified between Amazon VPCs over VPC peering, with compression enabled. We currently go Filebeat -> Redis -> Logstash -> Elasticsearch, with Filebeat in various VPCs, and Redis/Logstash/Elasticsearch in a single VPC (with AZ-specific deployments). We would then go Filebeat (source VPC) -> lumberjack with compression -> Filebeat (log storage VPC) -> Logstash -> Elasticsearch. Traffic volumes are high enough that the cross-AZ/cross-VPC traffic charges are measurable :) An alternative would be compression support on the Redis output + Redis input on Logstash...
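For context, the lumberjack/Logstash output that Beats already ships can compress frames, which is what makes the proposed hop cheap on the wire; a sketch of the sender-side configuration, with placeholder hosts:

```yaml
# filebeat.yml on the source-VPC hosts (sketch) -- ship over the lumberjack
# protocol with gzip compression to reduce cross-VPC transfer.
output.logstash:
  hosts: ["aggregator.logs.example.internal:5044"]  # placeholder aggregator endpoint
  compression_level: 3                              # gzip level 0-9 for the lumberjack frames
```

The missing piece this issue asks for is the receiving side: a Filebeat that can accept that lumberjack traffic instead of Logstash.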
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Describe the enhancement:
Allow filebeat to receive messages using the lumberjack protocol, e.g. from an upstream beat.
This would complement its existing abilities to receive syslog and raw TCP/UDP.
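A hypothetical sketch of what such an input could look like in filebeat.yml; the input type name and its options are illustrative only and are not an existing Filebeat feature:

```yaml
# filebeat.yml (hypothetical sketch) -- a beats/lumberjack input is the feature
# being requested here; the type and option names below are made up for illustration.
filebeat.inputs:
  - type: lumberjack                    # hypothetical input type
    listen_address: "0.0.0.0:5044"      # hypothetical option: where satellite Beats would ship to
    ssl:
      certificate: "/etc/filebeat/server.crt"
      key: "/etc/filebeat/server.key"
```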
Describe a specific use case for the enhancement or feature:
- This could replace Logstash for setups where you want to accept logs centrally from many Filebeat instances but don't want to run a heavyweight Logstash instance: for example, you just want to write to Elasticsearch with minimal filtering, or you can do everything you need in ES ingest pipelines. Filebeat can already write directly to Elasticsearch, but having lots of satellite Filebeat instances talking directly to Elasticsearch is not a good idea: you have to expose your ES cluster to all nodes, store ES write credentials on all nodes, and configure all nodes with consistent index patterns, etc.
- As a simple and high-performance way to get beats into Kafka: again, without using Logstash, and without configuring every single beat with Kafka specifics (hosts, credentials, topic, partitioning, etc.); see the sketch after this list.
- As a lightweight aggregator or proxy for lumberjack.
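In the Kafka case, only the aggregating Filebeat would carry the Kafka-specific settings; a sketch of that aggregator-side output, with placeholder broker and topic names:

```yaml
# filebeat.yml on the aggregator (sketch) -- satellite Beats ship lumberjack to this
# node, and only this node knows about Kafka; broker and topic names are placeholders.
output.kafka:
  hosts: ["kafka-1.example.internal:9092", "kafka-2.example.internal:9092"]
  topic: "filebeat-logs"
  required_acks: 1   # wait for the partition leader to acknowledge each batch
```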
Alternatives:
Could also enhance Filebeat to read messages from Redis, as an alternative to lumberjack. Filebeat already has a Redis output.
(The existing filebeat redis input doesn't actually accept messages from redis; it just reads redis slowlogs)
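The existing Redis output referenced above looks roughly like the following sketch (host and key names are placeholders); the alternative proposed here would add a matching input that consumes events from the same list:

```yaml
# filebeat.yml on the satellite hosts (sketch) -- the Redis output already exists;
# a Redis *input* on the aggregator side is what this alternative would require.
output.redis:
  hosts: ["redis.example.internal:6379"]  # placeholder host
  key: "filebeat"                         # Redis key (list) to publish events to
  db: 0
```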