Log Aggregation using Elasticsearch, Fluentd and Kibana

This project allows for the quick deployment of a fully functioning EFK Stack.

  • (E)lasticsearch
  • (F)luentd
  • (K)ibana

The intended use is as a local development environment for trying out Fluentd configuration before deploying it to a real environment. I have also included an optional NGINX web server that enables Basic authentication access control to Kibana (if using the X-Pack extension). In addition, there is a collection of "sources" that feed the EFK stack. For example, the via-td-agent folder contains Docker files that launch and configure an Ubuntu box, install td-agent, and run a Java JAR from which we can control the type of logging sent to EFK.

Requirements

All these instructions are for macOS only.

Pre-requisites

Install

Install supporting tools

brew install bash curl kubernetes-cli kubernetes-helm
brew cask install docker minikube virtualbox

Quick & Easy Startup - OSS

Ensure the .env file has the setting FLAVOUR_EFK set to a value of -oss

docker-compose up
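
For reference, the relevant part of the .env file might look like this (a sketch; only FLAVOUR_EFK is shown and the comment wording is mine, not copied from the repo):

```
# .env (sketch)
# Set to "-oss" for the OSS images, or to an empty string for the
# default images with the X-Pack extensions.
FLAVOUR_EFK=-oss
```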

You will then be able to access the stack, with Kibana at http://localhost:5601 and Elasticsearch at http://localhost:9200.

Quick & Easy Startup - Default (with XPack Extensions)

Ensure the .env file has the setting FLAVOUR_EFK set to an empty string.

docker-compose -f docker-compose.yml -f nginx/docker-compose.yml up

You will then be able to access the stack through the NGINX container.

When accessing via the NGINX container you do not need to supply the username and password credentials, as it uses the htpasswd.users file, which contains the default username kibana and password kibana. If you wish to use different credentials, replace the entry in the file with the following command:

htpasswd -b ./nginx/config/htpasswd.users newuser newpassword

⚠️ You must use the --build flag on docker-compose when switching between FLAVOUR_EFK values e.g.

docker-compose up --build

Log Sources

Logging via driver

This is a simple log source that uses the log driver feature of Docker:

logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    tag: httpd.access

The Docker image httpd:alpine is used to create a simple Apache web server. It writes the logs of the httpd process to STDOUT, which are picked up by the logging driver above.
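
Putting the driver snippet in context, a minimal compose service for the Apache container could look like this (a sketch; the service name, compose file version, and host port are assumptions, not taken from the repo):

```yaml
version: '3'
services:
  httpd:
    # Simple Apache web server that logs access entries to STDOUT
    image: httpd:alpine
    ports:
      - "8080:80"   # assumption: any free host port will do
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
```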

Launch Command

docker-compose -f docker-compose.yml -f via-logging-driver/docker-compose.yml up

Logging via td-agent

This source is based on an Ubuntu box with OpenJDK Java installed along with td-agent. The executable is a JAR, stored in the executables folder, that logs output controlled by log4j2 via slf4j. The JAR is built by the java-logger project; see that project's README for further information.

Launch Command

docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml up --build

Accessing the Fluentd UI

If the environment variable FLUENTD_UI_ENABLED is set to true in via-td-agent's fluentd.properties file, the UI will be available once the stack is up and running; otherwise the logs are tailed to keep the container alive. If the UI is not running, start it with the following command.

docker exec -it agent fluentd-ui start

You will then be able to access the configuration of td-agent via the Fluentd UI.

After the credentials have been submitted, click the "Setup td-agent" button and then the "Create" button. The dashboard should be displayed; from there, navigating around the UI makes it fairly obvious what you can change.

Changing the JAR log output type

In order to change the kind of logging output from the JAR, e.g. from single line logs to multi-line logs, the environment variable LOGGER_ENTRY_POINT needs to be set. This can be achieved via the .env file found in the root of the project. Simply uncomment the desired class.

Changing the td-agent configuration file

To try out different configuration options simply change the FLUENTD_CONF setting in the via-td-agent/docker-compose.yml environment section to one of the files that are listed in via-td-agent/config and then rebuild the stack.

# ctrl+c to stop the stack (if not running in detached mode)
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml down
docker image ls --quiet --filter 'reference=efk_agent:*' | xargs docker rmi -f
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml up --build

Changing the contents of a td-agent conf file

To test changes made to a config file that the td-agent service is already configured to use, simply edit the file on the host machine and then restart the td-agent service. The file is mounted into the container as a volume, so the changes are immediately visible to the container.

docker exec -it agent /bin/bash
service td-agent restart

Adding a new environment variable for use in the container

In order to make a new environment variable available to the td-agent process in the container it is necessary to add the variable to a number of files to make sure it gets propagated successfully. The files to update are:

| File | Description |
| --- | --- |
| .env | Contains a list of all the environment variables that can be passed to the container |
| via-td-agent/docker-compose.yml | Passes a subset of the environment variables to the Docker container |
| via-td-agent/executables/entrypoint.sh | Takes a subset of the environment variables within the container and makes them available to the td-agent service via /etc/default/td-agent |
| via-td-agent/config/td-agent-*.conf | The configuration files can make use of any variables defined in /etc/default/td-agent |
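
As a sketch of how a hypothetical variable (here called MY_NEW_VAR, which is not a real variable from the repo) would flow through those four files:

```
# .env
MY_NEW_VAR=some-value

# via-td-agent/docker-compose.yml (environment section of the agent service)
environment:
  - MY_NEW_VAR

# via-td-agent/executables/entrypoint.sh (append to /etc/default/td-agent)
echo "export MY_NEW_VAR=${MY_NEW_VAR}" >> /etc/default/td-agent

# via-td-agent/config/td-agent-*.conf (use the variable via embedded Ruby)
<match **>
  @type file
  path "/var/log/td-agent/#{ENV['MY_NEW_VAR']}"
</match>
```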

If you run the command below within this repo you will see an example of which files need to be changed and how.

git diff 66af1ad..857f181

Logging via fluent-bit

The following command will launch a Kubernetes cluster in minikube and install a fluent-bit DaemonSet. In addition, an Apache image is launched to verify that the fluent-bit setup forwards logs to the Docker composition started prior to running this script.

Launch Command

cd via-fluent-bit && ./start-k8s.sh

You will then be able to access the apache instance via the following:

open "http://$(minikube ip):30080"
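
Inside the cluster, the fluent-bit DaemonSet forwards events to the composition's Fluentd on the host. Its output section would look roughly like this (a sketch; the host address is an assumption, it is whatever IP the minikube VM uses to reach the host machine):

```
[OUTPUT]
    Name   forward
    Match  *
    Host   192.168.99.1   # assumption: host address as seen from the minikube VM
    Port   24224
```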

Encrypted logging with TLS

Use the following command to create some certificates to be used for testing purposes

openssl req -new -x509 -sha256 -days 1095 -newkey rsa:2048 -keyout fluentd.key -out fluentd.crt

# Country Name (2 letter code) []:GB
# State or Province Name (full name) []:England
# Locality Name (eg, city) []:Frome
# Organization Name (eg, company) []:Think Stack Limited
# Organizational Unit Name (eg, section) []:Think Stack Limited Certificate Authority
# Common Name (eg, fully qualified host name) []:fluentd
# Email Address []:[email protected]
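
With fluentd.crt and fluentd.key generated, the forward input can be switched to TLS. A minimal sketch of the Fluentd v1 configuration (the certificate paths are assumptions):

```
<source>
  @type forward
  port 24224
  <transport tls>
    cert_path        /etc/fluentd/fluentd.crt
    private_key_path /etc/fluentd/fluentd.key
    # only needed if the key was created with a passphrase
    private_key_passphrase changeme
  </transport>
</source>
```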
echo -e '\x93\xa9debug.tls\xceZr\xbc1\x81\xa3foo\xa3bar' | openssl s_client -connect localhost:24224
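
The echo test above writes a hand-crafted MessagePack-encoded forward event (an array of tag, timestamp, and record) to the TLS socket. Decoding the byte string by hand, using only the standard library, shows what Fluentd receives:

```python
import struct

# The exact bytes piped to openssl s_client in the command above
payload = b'\x93\xa9debug.tls\xceZr\xbc1\x81\xa3foo\xa3bar'

# 0x93: fixarray with 3 elements -> [tag, time, record]
assert payload[0] == 0x93
# 0xa9: fixstr of length 9 -> the event tag
tag = payload[2:11].decode()
# 0xce: big-endian uint32 -> the event timestamp (epoch seconds)
timestamp = struct.unpack('>I', payload[12:16])[0]
# 0x81: fixmap with 1 entry; 0xa3: fixstr of length 3 for key and value
key = payload[18:21].decode()
value = payload[22:25].decode()

print(tag, timestamp, {key: value})
# → debug.tls 1517468721 {'foo': 'bar'}
```

This matches Fluentd's in_forward wire format, so a matching source in the config will emit the event with tag debug.tls.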

Getting started with Kibana

Once the stack has launched it should be possible to access Kibana via http://localhost:5601. The log output is not instantly visible; first it is necessary to set up an index pattern. Kibana uses index patterns to retrieve data from Elasticsearch indices for things like visualizations.

Define index pattern

  • Navigate to the "Create index pattern" page in Kibana (found under Management)
  • Step 1 of 2: Define index pattern: type fluentd* into the "Index pattern" text box
  • Click the "Next step" button
  • Step 2 of 2: Configure settings: select @timestamp in the "Time Filter field name" drop-down list box
  • Click the "Create index pattern" button

View logs

  • Navigate to the Discover page in Kibana
  • Click the "Auto-refresh" button at the top right of the page
  • Select 5 seconds from the drop-down panel that immediately appears
  • Hover over a field name in the list next to the left hand menu and click the contextual "add" button to select the fields you wish to summarize in the table; select at least the "log" and "message" fields
  • The selected fields should move from the "Available Fields" section to the "Selected Fields" section
  • If using the logging driver you can trigger new logs to appear by browsing to the Apache web server and refreshing the page a few times

Getting started with ElasticHQ

This application is used to perform analysis of metrics in the Elasticsearch cluster. When the application UI loads, the address of the Elasticsearch cluster needs to be entered in order to view the metrics. The default value is localhost:9200. This should be changed to elasticsearch:9200, because the connection is made between running Docker containers on the efk network.

Launch Command

docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml -f elastichq/docker-compose.yml up --build

You will then be able to access the ElasticHQ UI.

Useful Commands

Docker status commands

watch 'docker ps -a --format "table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Ports}}"'

General minikube commands

kubectl cluster-info
kubectl cluster-info dump
kubectl config view

Test internet connectivity in minikube

minikube ssh
# Commands to run during the SSH connection to the minikube VM
egrep -v '^#' /etc/resolv.conf
ip route
ping -c 4 google.com

Tail the logs of fluent-bit

kubectl logs -f --namespace=logging $(kubectl get pods --namespace=logging -l k8s-app=fluent-bit-logging -o name) -c fluent-bit

Useful Elasticsearch commands

curl -X GET http://localhost:9200/_cat/indices?v
curl -X GET http://localhost:9200/_cluster/health?pretty=true

Docker Clean Up

When running multiple stack updates or rebuilding stacks it is easy to build up a collection of dangling containers, images and volumes that can be purged from your system. I use the following to perform a cleanup of my Docker environment.

# Delete all exited containers and their associated volume
docker ps --quiet --filter status=exited | xargs docker rm -v
# Delete all stopped containers, dangling images, unused networks, and volumes not used by any container
docker system prune --force --volumes

⚠️ Quite destructive commands follow...

# Delete all containers
docker ps --quiet --all | xargs docker rm -f
# Forcefully delete all images that match the name passed into the filter, e.g. efk_*
docker image ls --quiet --filter 'reference=efk_*:*' | xargs docker rmi -f
# Delete everything? EVERYTHING!
docker system prune --all

Testing

See TESTING.md.

Contributing

Please do not hesitate to open an issue with any questions or problems.

See CONTRIBUTING.md.
