
Logstash


We are currently working on integrating the Elastic stack!

Description

From https://www.elastic.co/products/logstash :

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash."

Configuration

Performance

pipeline.workers

The number of workers that will, in parallel, execute the filter and output stages of the pipeline. If you find that events are backing up, or that the CPU is not saturated, consider increasing this number to better utilize machine processing power.

https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html

This setting can be adjusted in /etc/logstash/logstash.yml.
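For example, on a machine with 8 CPU cores you might raise the worker count as shown below (the value is only an illustration; tune it for your hardware), then restart Logstash:

pipeline.workers: 8

sudo docker stop so-logstash && sudo so-elastic-start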

LOGSTASH_HEAP

By default, if total available memory is 8GB or greater, LOGSTASH_HEAP in /etc/nsm/securityonion.conf is configured (during setup) to equal 25% of available memory, but no greater than 31GB.

See https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops for more details.

You may need to adjust the value of LOGSTASH_HEAP depending on your system's available memory and performance (run sudo so-elastic-restart afterward for the change to take effect).
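For example, to give Logstash a 4GB heap, the entry in /etc/nsm/securityonion.conf might look like the following (the value is only an illustration; match the format of the existing entry on your system):

LOGSTASH_HEAP="4000m"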

Parsing

Configuration files for custom parsing can be placed in /etc/logstash/custom. These files are automatically copied to /etc/logstash/conf.d when Logstash starts.

After adding your custom configuration file(s), restart Logstash and check the log(s) for errors:

sudo docker stop so-logstash && sudo so-elastic-start && sudo tail -f /var/log/logstash/logstash.log
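As a minimal sketch, a custom parsing file might look something like this (the filename, tag, and field names below are hypothetical examples, not files shipped with Security Onion):

# /etc/logstash/custom/8000_custom_myapp.conf (hypothetical)
filter {
  if "myapp" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:src_ip} %{GREEDYDATA:myapp_message}" }
    }
  }
}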

Mapping Templates

Logstash loads default mapping templates for Elasticsearch to use from /etc/logstash.

The two templates currently in use are:

logstash-template.json - applies to logstash-* indices

beats-template.json - applies to logstash-beats-* indices

You can check to see if these templates are loaded by typing something like the following at a command prompt:

curl localhost:9200/_template/logstash?pretty
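To list every template currently loaded (including the beats template), you can query the _template endpoint without specifying a name:

curl localhost:9200/_template?pretty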

Logging

Log file settings can be adjusted in /etc/logstash/log4j2.properties. Currently, logs roll over daily and are deleted after 7 days.
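As a rough sketch, daily rollover with age-based deletion is expressed in log4j2 properties along these lines (a simplified illustration, not the exact file shipped with Security Onion; the appender name, paths, and layout pattern are assumptions):

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = /var/log/logstash/logstash.log
appender.rolling.filePattern = /var/log/logstash/logstash-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = /var/log/logstash
appender.rolling.strategy.action.condition.type = IfLastModified
appender.rolling.strategy.action.condition.age = 7d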

Options

You can specify your own custom options to be appended to the Logstash startup command by editing LOGSTASH_OPTIONS in /etc/nsm/securityonion.conf.
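For example, to have Logstash automatically reload its pipeline configuration when files change, you could set the following (--config.reload.automatic is a standard Logstash command-line flag; whether you want it enabled is a judgment call for your deployment), then run sudo so-elastic-restart:

LOGSTASH_OPTIONS="--config.reload.automatic"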

Queue

Memory-backed

From: https://www.elastic.co/guide/en/logstash/current/persistent-queues.html

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable.

Persistent

From: https://www.elastic.co/guide/en/logstash/current/persistent-queues.html

In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.

If you experience adverse effects using the default memory-backed queue, you can configure a disk-based persistent queue by uncommenting the following lines in /etc/logstash/logstash.yml, modifying the values as appropriate, and restarting Logstash:

#queue.type: persisted
#queue.max_bytes: 1gb

sudo docker stop so-logstash && sudo so-elastic-start
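Once uncommented and adjusted, the lines in /etc/logstash/logstash.yml might look like the following (the 4gb figure is only an example; size it according to the Queue Max Bytes note below):

queue.type: persisted
queue.max_bytes: 4gb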

More information:
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html

Queue Max Bytes

The total capacity of the queue in number of bytes. Make sure the capacity of your disk drive is greater than the value you specify here. If both queue.max_events and queue.max_bytes are specified, Logstash uses whichever criteria is reached first.

Dead Letter Queue

If you want to check for dropped events, you can enable the dead letter queue. This will write all records that fail to make it into Elasticsearch to a sequentially numbered file (a new file for each start/restart of Logstash).

This can be achieved by adding the following to /etc/logstash/logstash.yml:

dead_letter_queue.enable: true

and restarting Logstash:

sudo docker stop so-logstash && sudo so-elastic-start

The dead letter queue files are located in /nsm/logstash/dead_letter_queue/main/.
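If you want Logstash itself to read those events back (for inspection or reprocessing), the dead_letter_queue input plugin can do so. A minimal sketch, with the output deliberately kept to stdout for inspection only:

input {
  dead_letter_queue {
    path => "/nsm/logstash/dead_letter_queue"
    commit_offsets => true
  }
}
output {
  stdout { codec => rubydebug }
}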

More information:
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html

Data Fields

Logstash processes Bro logs, syslog, IDS alerts, etc., parsing this data into many different data fields, as described on the Data Fields page.

Log

The Logstash log is located at /var/log/logstash/logstash.log.
