This project allows for the quick deployment of a fully functioning EFK Stack.
- (E)lasticsearch
- (F)luentd
- (K)ibana
The intended use is as a local development environment for trying out Fluentd configuration before deploying it to a real environment.
I have also included an optional NGINX web server that enables Basic authentication access control to Kibana (if using the X-Pack extension). In addition, there is a collection of "sources" that feed the EFK stack. For example, the `via-td-agent` folder contains Docker files that launch and configure an Ubuntu box, install td-agent, and run a Java JAR from which we can control the type of logging sent to EFK.
- Requirements
- Quick & Easy Startup - OSS
- Quick & Easy Startup - Default (with XPack Extensions)
- Log Sources
- Getting started with Kibana
- Getting started with ElasticHQ
- Useful Commands
- Docker Clean Up
- Testing
- Contributing
- References
- External Projects
- Useful Articles
All these instructions are for macOS only. Install the prerequisites via Homebrew:

```shell
brew install bash curl kubernetes-cli kubernetes-helm
brew cask install docker minikube virtualbox
```
Ensure the `.env` file has the setting `FLAVOUR_EFK` set to a value of `-oss`, then run:

```shell
docker-compose up
```

You will then be able to access the stack via the following:

- Elasticsearch @ http://localhost:9200
- Kibana @ http://localhost:5601
Ensure the `.env` file has the setting `FLAVOUR_EFK` set to an empty string, then run:

```shell
docker-compose -f docker-compose.yml -f nginx/docker-compose.yml up
```

You will then be able to access the stack via the following:

- Kibana @ http://localhost:8080
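For reference, HTTP Basic authentication (which the NGINX container enforces using its `htpasswd.users` file) is just a base64-encoded `user:password` pair in the `Authorization` header. A quick sketch with the default `kibana`/`kibana` pair:

```shell
# Build the Basic-auth token by hand; "kibana:kibana" is the default pair
# shipped in htpasswd.users.
TOKEN=$(printf 'kibana:kibana' | base64)
echo "Authorization: Basic $TOKEN"
# The same header can be supplied explicitly if needed, e.g.:
# curl -H "Authorization: Basic $TOKEN" http://localhost:8080/
```

Note that `curl -u kibana:kibana` performs the same encoding for you.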
When accessing via the NGINX container you do not need to supply the username and password credentials, as it uses the `htpasswd.users` file, which contains the default username `kibana` and password `kibana`. If you wish to use different credentials, replace the entry in the file using the following command:

```shell
htpasswd -b ./nginx/config/htpasswd.users newuser newpassword
```
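If `htpasswd` (part of Apache's httpd tools) is not installed, an equivalent entry can be generated with `openssl`; a sketch using placeholder credentials:

```shell
# Generate an Apache-compatible apr1 (MD5) password hash with openssl and
# print the htpasswd-style line; "newuser"/"newpassword" are placeholders.
HASH=$(openssl passwd -apr1 newpassword)
printf '%s:%s\n' newuser "$HASH"
# Redirect the output into ./nginx/config/htpasswd.users to use it.
```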
Remember to pass the `--build` flag to docker-compose when switching between `FLAVOUR_EFK` values, e.g.

```shell
docker-compose up --build
```
This is a simple log source that uses the log driver feature of Docker:

```yaml
logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    tag: httpd.access
```

The Docker image `httpd:alpine` is used to create a simple Apache web server. This outputs the logs of the `httpd` process to STDOUT, which are picked up by the logging driver above.

```shell
docker-compose -f docker-compose.yml -f via-logging-driver/docker-compose.yml up
```
This source is based on an Ubuntu box with OpenJDK Java installed along with the td-agent. The executable is a JAR stored in the executables folder that will log output controlled by log4j2 via slf4j. The JAR is controlled by the java-logger project mentioned here. See the README of that project for further information.
```shell
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml up --build
```
If the environment variable `FLUENTD_UI_ENABLED` is set to `true` in via-td-agent's `fluentd.properties` file, then the UI will be available once the stack is up and running; otherwise the logs will be tailed to keep the container alive. The following command starts the Fluentd UI if it is not running:

```shell
docker exec -it agent fluentd-ui start
```
You will then be able to access the configuration of td-agent via the following:
- Fluentd UI @ http://localhost:9292
  - username: admin
  - password: changeme
After the credentials above have been submitted, click the "Setup td-agent" button and then click the "Create" button. The dashboard should be displayed. From here it is fairly obvious what you can change by navigating around the UI.
In order to change the kind of logging output from the JAR, e.g. from single-line logs to multi-line logs, the environment variable `LOGGER_ENTRY_POINT` needs to be set. This can be achieved via the `.env` file found in the root of the project; simply uncomment the desired class.
To try out different configuration options, change the `FLUENTD_CONF` setting in the `via-td-agent/docker-compose.yml` environment section to one of the files listed in `via-td-agent/config`, and then rebuild the stack:

```shell
# ctrl+c to stop the stack (if not running in detached mode)
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml down
docker image ls --quiet --filter 'reference=efk_agent:*' | xargs docker rmi -f
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml up --build
```
To test changes to a config file that is already used by the td-agent service, simply edit the file on the host machine and then restart the td-agent service. The file is linked to the container via a volume mount, so the changes are immediately visible inside the container.

```shell
docker exec -it agent /bin/bash
service td-agent restart
```
To make a new environment variable available to the td-agent process in the container, the variable must be added to several files so that it gets propagated successfully. The files to update are:
| File | Description |
| --- | --- |
| .env | Contains a list of all the environment variables that can be passed to the container |
| via-td-agent/docker-compose.yml | Passes a sub-set of the environment variables to the Docker container |
| via-td-agent/executables/entrypoint.sh | Takes a sub-set of the environment variables within the container and makes them available to the td-agent service via /etc/default/td-agent |
| via-td-agent/config/td-agent-*.conf | The configuration files can make use of any variables defined in /etc/default/td-agent |
If you run the command below within this repo, you will see an example of which files need to be changed and how.

```shell
git diff 66af1ad..857f181
```
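As a rough illustration of the four touchpoints (the variable name `MY_NEW_VAR` here is hypothetical):

```
# .env - declare it so docker-compose can read it
MY_NEW_VAR=some-value

# via-td-agent/docker-compose.yml - pass it through in the service's
# environment section:
#   - MY_NEW_VAR

# via-td-agent/executables/entrypoint.sh - surface it to the td-agent service
echo "MY_NEW_VAR=${MY_NEW_VAR}" >> /etc/default/td-agent

# via-td-agent/config/td-agent-*.conf - reference it in the config, e.g.
#   "#{ENV['MY_NEW_VAR']}"
```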
The following command launches a Kubernetes cluster in minikube and installs a fluent-bit DaemonSet. An Apache image is also launched so you can verify that the fluent-bit setup forwards logs to the Docker composition, which should be running before this script is executed.

```shell
cd via-fluent-bit && ./start-k8s.sh
```

You will then be able to access the Apache instance via the following:

```shell
open "http://$(minikube ip):30080"
```
Use the following command to create some certificates to be used for testing purposes:

```shell
openssl req -new -x509 -sha256 -days 1095 -newkey rsa:2048 -keyout fluentd.key -out fluentd.crt
# Country Name (2 letter code) []:GB
# State or Province Name (full name) []:England
# Locality Name (eg, city) []:Frome
# Organization Name (eg, company) []:Think Stack Limited
# Organizational Unit Name (eg, section) []:Think Stack Limited Certificate Authority
# Common Name (eg, fully qualified host name) []:fluentd
# Email Address []:[email protected]
```
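A non-interactive variant of the same command, followed by a quick sanity check. The `-subj` string pre-fills the prompts above and `-nodes` skips the key passphrase; both are optional tweaks, not part of the original recipe:

```shell
# Generate the key and self-signed certificate without prompting.
openssl req -new -x509 -sha256 -days 1095 -newkey rsa:2048 -nodes \
  -keyout fluentd.key -out fluentd.crt \
  -subj '/C=GB/ST=England/L=Frome/O=Think Stack Limited/CN=fluentd'

# Confirm the subject of the certificate that was just written.
openssl x509 -in fluentd.crt -noout -subject
```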
```shell
echo -e '\x93\xa9debug.tls\xceZr\xbc1\x81\xa3foo\xa3bar' | openssl s_client -connect localhost:24224
```

The escaped bytes form a MessagePack-encoded event in Fluentd's forward format, i.e. `[tag, time, record]`: `\x93` is a 3-element array header, `\xa9debug.tls` the 9-byte tag string, `\xceZr\xbc1` a uint32 Unix timestamp (0x5a72bc31), and `\x81\xa3foo\xa3bar` the record map `{"foo": "bar"}`.
Once the stack has launched, it should be possible to access Kibana via http://localhost:5601. You will not see log output immediately; first you need to set up an index pattern. Kibana uses index patterns to retrieve data from Elasticsearch indices for things like visualizations.
- Click this link to navigate to the "Create index pattern" page
- Step 1 of 2: Define index pattern: type `fluentd*` into the "Index pattern" text box
- Click the "Next step" button
- Step 2 of 2: Configure settings: select `@timestamp` in the "Time Filter field name" drop-down list box
- Click the "Create index pattern" button
- Click this link to navigate to the Discover page
- Click the "Auto-refresh" button at the top right of the page
- Select `5 seconds` from the drop-down panel that immediately appears
- Select the fields you wish to summarize in the table next to the left-hand menu by hovering over the field name and clicking the contextual "add" button. Select at least the "log" and "message" fields
- The selected fields should move from the "Available Fields" section to the "Selected Fields" section
- If using the logging driver, you can trigger new logs to appear by clicking this link and refreshing the page a few times
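As an alternative to clicking through the UI, recent Kibana versions can create the index pattern via the saved-objects HTTP API. A sketch, assuming a Kibana 6.x-era endpoint and that the stack is already up:

```shell
# JSON payload matching the UI steps: title "fluentd*", time field "@timestamp".
BODY='{"attributes":{"title":"fluentd*","timeFieldName":"@timestamp"}}'

# The kbn-xsrf header is required by Kibana's HTTP API.
curl -s -X POST 'http://localhost:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d "$BODY" || echo "Kibana is not reachable; start the stack first"
```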
This application is used to perform analysis of metrics in the Elasticsearch cluster. When the application UI loads, the address of the Elasticsearch cluster needs to be entered in order to view the metrics. The default value is localhost:9200; this should be changed to `elasticsearch:9200`, because the connection is made between Docker containers running in the `efk` network.
```shell
docker-compose -f docker-compose.yml -f via-td-agent/docker-compose.yml -f elastichq/docker-compose.yml up --build
```

You will then be able to access ElasticHQ via the following:

- ElasticHQ @ http://localhost:5000
```shell
watch 'docker ps -a --format "table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Ports}}"'

kubectl cluster-info
kubectl cluster-info dump
kubectl config view

minikube ssh
# Commands to run during the SSH connection to the minikube VM
cat /etc/resolv.conf | egrep -v '^#'
ip route
ping -c 4 google.com

kubectl logs -f --namespace=logging $(kubectl get pods --namespace=logging -l k8s-app=fluent-bit-logging -o name) -c fluent-bit

curl -X GET http://localhost:9200/_cat/indices?v
curl -X GET http://localhost:9200/_cluster/health?pretty=true
```
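The health endpoint returns JSON; a sketch of pulling out just the `status` field. The sample response here is illustrative, not live output:

```shell
# A trimmed example of what _cluster/health returns.
SAMPLE='{"cluster_name":"docker-cluster","status":"yellow","number_of_nodes":1}'

# Extract "status" (green/yellow/red); with a live stack you would pipe the
# curl output in place of $SAMPLE.
echo "$SAMPLE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])'
# -> yellow
```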
When running multiple stack updates or rebuilding stacks, it is easy to accumulate dangling containers, images and volumes that can be purged from your system. I use the following to clean up my Docker environment.

```shell
# Delete all exited containers and their associated volumes
docker ps --quiet --filter status=exited | xargs docker rm -v

# Delete all images, volumes and networks that aren't associated with a container
docker system prune --force --volumes

# Delete all containers
docker ps --quiet --all | xargs docker rm -f

# Forcefully delete all images whose name matches the filter, e.g. efk_*
docker image ls --quiet --filter 'reference=efk_*:*' | xargs docker rmi -f

# Delete everything? EVERYTHING!
docker system prune --all
```
See TESTING.md.
Please do not hesitate to open an issue with any questions or problems.
See CONTRIBUTING.md.
- Install Elasticsearch with Docker
- EFK Docker Images
- Fluentd Quickstart
- Fluent Bit
- Log4J2 Pattern Layout
- ElasticHQ
- log4j2 Appenders
- log4j2 Configuration
- kubectl Cheat Sheet
- How to remove docker images containers and volumes
- How To Centralize Your Docker Logs with Fluentd and ElasticSearch on Ubuntu 16.04
- Free Alternative to Splunk Using Fluentd
- Elasticsearch monitoring and management plugins
- Add entries to pod hosts file with host aliases
- Exploring fluentd via EFK Stack for Docker Logging
- Logging Kubernetes Pods using Fluentd and Elasticsearch
- Sharing a local registry with minikube
- Don’t be terrified of building native extensions
- Parse Syslog Messages Robustly
- Docker Logging via EFK Stack with Docker Compose
- Fluent Bit on Kubernetes