Logstash
We are currently working on integrating the Elastic stack!
From https://www.elastic.co/products/logstash :
> Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”
- Configuration files for Logstash can be found in `/etc/logstash/`.
- Configuration files for custom parsing can be placed in `/etc/logstash/conf.d/`. After adding your custom configuration file, restart Logstash and check the log(s) for errors: `sudo docker restart so-logstash && sudo tail -f /var/log/logstash/logstash.log`
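As a sketch of what a custom parsing file in `/etc/logstash/conf.d/` can look like (the file name, grok pattern, and field names below are illustrative, not part of the Security Onion defaults):

```
# /etc/logstash/conf.d/9999-custom-example.conf (hypothetical file name)
filter {
  # Illustrative pattern: pull a timestamp, host, and program name
  # out of a syslog-style "message" field
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:program}: %{GREEDYDATA:msg}" }
    tag_on_failure => ["_custom_grok_failure"]
  }
}
```

Events that fail the pattern are tagged rather than dropped, so you can search for `_custom_grok_failure` to spot-check your parser after restarting Logstash.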
- Other configuration options for Logstash can be found in `/etc/nsm/securityonion.conf`.
- By default, if total available memory is 8GB or greater, `LOGSTASH_HEAP` in `/etc/nsm/securityonion.conf` is configured (during setup) to equal 25% of available memory, but no greater than 31GB. See https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops for more details. You may need to adjust the value of `LOGSTASH_HEAP` depending on your system's performance (run `sudo so-elastic-restart` afterward).
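The 25%-capped-at-31GB rule above can be sketched in shell (the 64GB total is just an example value; setup derives the real figure from your system):

```shell
#!/bin/sh
# Sketch of the setup-time heap calculation:
# 25% of total memory, capped at 31GB (the compressed-oops limit).
total_mb=65536                # example: 64GB of RAM (hardcoded for illustration)
heap_mb=$(( total_mb / 4 ))   # 25% of total
cap_mb=$(( 31 * 1024 ))       # 31GB cap
[ "$heap_mb" -gt "$cap_mb" ] && heap_mb=$cap_mb
echo "LOGSTASH_HEAP=${heap_mb}m"
```

So a 64GB box would get a 16GB heap, while anything with 124GB or more of RAM would be capped at 31GB.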
- Logstash `pipeline.workers` can be adjusted in `/etc/logstash/logstash.yml`.
- Logstash `queue.max_bytes` can be adjusted in `/etc/logstash/logstash.yml`.
- Logstash logs can be found in `/var/log/logstash/`.
- Logging configuration can be found in `/etc/logstash/log4j2.properties`.
From https://www.elastic.co/guide/en/logstash/current/persistent-queues.html :
> By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable.
>
> In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.
If you experience adverse effects using the default memory-backed queue, you can configure a disk-based persistent queue by uncommenting the following lines in `/etc/logstash/logstash.yml`, modifying the values as appropriate, and restarting Logstash:

```
#queue.type: persisted
#queue.max_bytes: 1gb
```

```
sudo docker stop so-logstash && sudo so-elastic-start
```
More information:
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
If you want to check for dropped events, you can enable the dead letter queue. This will write all records that are unable to make it into Elasticsearch to a sequentially numbered file (one per start/restart of Logstash). To enable it, add the following to `/etc/logstash/logstash.yml`:

```
dead_letter_queue.enable: true
```

and restart Logstash:

```
sudo docker stop so-logstash && sudo so-elastic-start
```
More information:
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html
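Once the dead letter queue is enabled, Logstash's `dead_letter_queue` input plugin can replay the failed records. A minimal inspection pipeline might look like the following (the path below is an assumption — check `path.dead_letter_queue`, or the default under `path.data`, on your system):

```
input {
  dead_letter_queue {
    # Assumed DLQ location; verify path.data / path.dead_letter_queue in logstash.yml
    path => "/var/lib/logstash/dead_letter_queue"
    commit_offsets => true
  }
}
output {
  # Print failed events to the console so you can see why they were rejected
  # before deciding how to fix and reprocess them
  stdout { codec => rubydebug }
}
```

With `commit_offsets => true`, the plugin remembers its position in the queue, so restarting the pipeline won't replay events you've already inspected.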
Logstash processes Bro logs, syslog, IDS alerts, etc., formatting the data into many different fields, as described on the Data Fields page.