This repository has been archived by the owner on May 6, 2020. It is now read-only.

fix(elastic search): Allow the elastic search plugin to index via namespace

You can provide the elastic search plugin a path to look up a value to use as the index name, such as . However, if you do this you can't append a logstash-style date format to the end of the index name, which helps with archiving old log data. So we are going to monkey-patch the plugin to provide this functionality until the pull request is accepted.
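The behavior this patch enables can be sketched in isolation: take a value pulled from the log record (here hard-coded; in the plugin it is found via `target_index_key`) and append a logstash-style date suffix. The variable names and values below are invented for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: combine a per-record value (e.g. the kubernetes
# namespace) with a logstash-style date suffix to form the index name.
namespace="deis"            # value the plugin would find via target_index_key
dateformat="%Y.%m.%d"       # logstash_dateformat-style pattern

# date -u formats the current UTC time, giving e.g. "deis-2016.10.19".
index="${namespace}-$(date -u +"${dateformat}")"
echo "${index}"
```

This yields index names like `deis-2016.10.19`, so old per-namespace indexes can be archived by date.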
Jonathan Chauncey committed Oct 19, 2016
1 parent af15873 commit f113aac
Showing 5 changed files with 423 additions and 6 deletions.
13 changes: 13 additions & 0 deletions LICENSE
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
Copyright 2016 Engine Yard, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
15 changes: 15 additions & 0 deletions README.md
@@ -40,6 +40,21 @@ This plugin is used to decorate all log entries with kubernetes metadata.
### [fluent-plugin-elasticsearch](https://github.com/uken/fluent-plugin-elasticsearch)
Allows fluentd to send log data to an Elasticsearch cluster. You must set the `ELASTICSEARCH_HOST` environment variable for this plugin to work.

* `ELASTICSEARCH_HOST="some.host"`
* `ELASTICSEARCH_SCHEME="http/https"`
* `ELASTICSEARCH_PORT="9200"`
* `ELASTICSEARCH_USER="username"`
* `ELASTICSEARCH_PASSWORD="password"`
* `ELASTICSEARCH_LOGSTASH_FORMAT="true/false"` - Creates indexes in the format `index_prefix-YYYY.MM.DD`
* `ELASTICSEARCH_TARGET_INDEX_KEY="kubernetes.namespace_name"` - Allows the index name to come from within the log message map. See example message format below. This allows the user to have an index per namespace, container name, or other dynamic value.
* `ELASTICSEARCH_TARGET_TYPE_KEY="some.key"` - Allows the user to set `_type` to a custom value found in the map.
* `ELASTICSEARCH_INCLUDE_TAG_KEY="true/false"` - Merge the fluentd tag back into the log message map.
* `ELASTICSEARCH_INDEX_NAME="fluentd"` - Set the index name where all events will be sent.
* `ELASTICSEARCH_LOGSTASH_PREFIX="logstash"` - Set the logstash prefix variable which is used when you want to use logstash format without specifying `ELASTICSEARCH_TARGET_INDEX_KEY`.
* `ELASTICSEARCH_TIME_KEY=""` - Specify where the plugin can find the timestamp used for the `@timestamp` field.
* `ELASTICSEARCH_TIME_KEY_FORMAT=""` - Specify the format of `ELASTICSEARCH_TIME_KEY`.
* `ELASTICSEARCH_TIME_KEY_EXCLUDE_TIMESTAMP=""` - If `ELASTICSEARCH_TIME_KEY` is specified, don't set `@timestamp`.
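The variables above can be combined; a hypothetical environment for per-namespace, date-stamped indexes might look like the following (the hostname and all values are invented examples, not defaults):

```shell
#!/bin/sh
# Hypothetical configuration: index per kubernetes namespace, with
# logstash-format date suffixes on each index name.
export ELASTICSEARCH_HOST="es.example.com"
export ELASTICSEARCH_SCHEME="https"
export ELASTICSEARCH_PORT="9200"
export ELASTICSEARCH_LOGSTASH_FORMAT="true"
export ELASTICSEARCH_TARGET_INDEX_KEY="kubernetes.namespace_name"
```

With `ELASTICSEARCH_TARGET_INDEX_KEY` set to `kubernetes.namespace_name`, a record whose `kubernetes` map contains `namespace_name: deis` would be indexed under `deis-YYYY.MM.DD` rather than the static `ELASTICSEARCH_INDEX_NAME`.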

### [fluent-plugin-remote_syslog](https://github.com/dlackty/fluent-plugin-remote_syslog)
This plugin allows `fluentd` to send data to a remote syslog endpoint like [papertrail](http://papertrailapp.com). You can configure `fluentd` to talk to multiple remote syslog endpoints by using the following scheme:
* `SYSLOG_HOST_1=some.host`
2 changes: 1 addition & 1 deletion rootfs/Dockerfile
@@ -13,7 +13,7 @@ RUN buildDeps='g++ gcc make ruby-dev'; \
bundle install --gemfile=/opt/fluentd/deis-output/Gemfile && \
rake --rakefile=/opt/fluentd/deis-output/Rakefile build && \
fluent-gem install --no-document fluent-plugin-kubernetes_metadata_filter -v 0.25.3 && \
-fluent-gem install --no-document fluent-plugin-elasticsearch -v 1.6.0 && \
+fluent-gem install --no-document fluent-plugin-elasticsearch -v 1.7.0 && \
fluent-gem install --no-document fluent-plugin-remote_syslog -v 0.3.2 && \
fluent-gem install --no-document fluent-plugin-sumologic-mattk42 -v 0.0.4 && \
fluent-gem install --no-document influxdb -v 0.3.2 && \
29 changes: 24 additions & 5 deletions rootfs/opt/fluentd/sbin/stores/elastic_search
@@ -9,6 +9,18 @@ FLUENTD_BUFFER_CHUNK_LIMIT=${FLUENTD_BUFFER_CHUNK_LIMIT:-8m}
FLUENTD_BUFFER_QUEUE_LIMIT=${FLUENTD_BUFFER_QUEUE_LIMIT:-8192}
FLUENTD_BUFFER_TYPE=${FLUENTD_BUFFER_TYPE:-memory}
FLUENTD_BUFFER_PATH=${FLUENTD_BUFFER_PATH:-/var/fluentd/buffer}
ELASTICSEARCH_LOGSTASH_FORMAT=${ELASTICSEARCH_LOGSTASH_FORMAT:-true}
# ELASTICSEARCH_LOGSTASH_PREFIX=${ELASTICSEARCH_LOGSTASH_PREFIX:-"logstash"}
# ELASTICSEARCH_TARGET_INDEX_KEY=${TARGET_INDEX_KEY:-""}
# ELASTICSEARCH_TARGET_TYPE_KEY=${TARGET_TYPE_KEY:-""}
# ELASTICSEARCH_INCLUDE_TAG_KEY=${INCLUDE_TAG_KEY:-false}
# ELASTICSEARCH_INDEX_NAME=${ELASTICSEARCH_INDEX_NAME:-"fluentd"}
# ELASTICSEARCH_TIME_KEY=${ELASTICSEARCH_TIME_KEY:-""}
# ELASTICSEARCH_TIME_KEY_FORMAT=${ELASTICSEARCH_TIME_KEY_FORMAT:-""}
# ELASTICSEARCH_TIME_KEY_EXCLUDE_TIMESTAMP=${ELASTICSEARCH_TIME_KEY_EXCLUDE_TIMESTAMP:-""}




if [ -n "$ELASTICSEARCH_HOST" ]
then
@@ -19,20 +19,27 @@ then
cat << EOF >> $FLUENTD_CONF
<store>
@type elasticsearch
-include_tag_key true
-time_key time
host ${ELASTICSEARCH_HOST}
-port ${ELASTICSEARCH_PORT}
-scheme ${ELASTICSEARCH_SCHEME}
+$([ -n "${ELASTICSEARCH_SCHEME}" ] && echo scheme ${ELASTICSEARCH_SCHEME})
+$([ -n "${ELASTICSEARCH_PORT}" ] && echo port ${ELASTICSEARCH_PORT})
+$([ -n "${ELASTICSEARCH_USER}" ] && echo user ${ELASTICSEARCH_USER})
+$([ -n "${ELASTICSEARCH_PASSWORD}" ] && echo password ${ELASTICSEARCH_PASSWORD})
+$([ -n "$ELASTICSEARCH_TIME_KEY_FORMAT" ] && echo time_key_format ${ELASTICSEARCH_TIME_KEY_FORMAT})
+$([ -n "$ELASTICSEARCH_TIME_KEY" ] && echo time_key ${ELASTICSEARCH_TIME_KEY})
+$([ -n "$ELASTICSEARCH_TIME_KEY_EXCLUDE_TIMESTAMP" ] && echo time_key_exclude_timestamp ${ELASTICSEARCH_TIME_KEY_EXCLUDE_TIMESTAMP})
+$([ -n "$ELASTICSEARCH_LOGSTASH_PREFIX" ] && echo logstash_prefix ${ELASTICSEARCH_LOGSTASH_PREFIX})
+$([ -n "$ELASTICSEARCH_INDEX_NAME" ] && echo index_name ${ELASTICSEARCH_INDEX_NAME})
+$([ -n "$ELASTICSEARCH_INCLUDE_TAG_KEY" ] && echo include_tag_key ${ELASTICSEARCH_INCLUDE_TAG_KEY})
+$([ -n "$ELASTICSEARCH_TARGET_INDEX_KEY" ] && echo target_index_key ${ELASTICSEARCH_TARGET_INDEX_KEY})
+$([ -n "$ELASTICSEARCH_TARGET_TYPE_KEY" ] && echo target_type_key ${ELASTICSEARCH_TARGET_TYPE_KEY})
logstash_format ${ELASTICSEARCH_LOGSTASH_FORMAT}
buffer_type ${FLUENTD_BUFFER_TYPE}
$([ "${FLUENTD_BUFFER_TYPE}" == "file" ] && echo buffer_path ${FLUENTD_BUFFER_PATH})
+$([ "${FLUENTD_DISABLE_RETRY_LIMIT}" == "true" ] && echo disable_retry_limit)
buffer_chunk_limit ${FLUENTD_BUFFER_CHUNK_LIMIT}
buffer_queue_limit ${FLUENTD_BUFFER_QUEUE_LIMIT}
flush_interval ${FLUENTD_FLUSH_INTERVAL}
retry_limit ${FLUENTD_RETRY_LIMIT}
-$([ "${FLUENTD_DISABLE_RETRY_LIMIT}" == "true" ] && echo disable_retry_limit)
retry_wait ${FLUENTD_RETRY_WAIT}
max_retry_wait ${FLUENTD_MAX_RETRY_WAIT}
num_threads ${FLUENTD_FLUSH_THREADS}
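The `$([ -n "$VAR" ] && echo …)` lines in the store script above emit a config line into the heredoc only when the corresponding variable is set. A minimal standalone sketch of the pattern (the function name and values are invented for illustration):

```shell
#!/bin/sh
# Sketch of conditional config emission: lines for unset/empty variables
# collapse to an empty command substitution (leaving a blank line),
# while set variables produce a "key value" directive.
ELASTICSEARCH_USER="admin"      # set, so a "user" line is emitted
ELASTICSEARCH_PASSWORD=""       # empty, so no "password" line is emitted

emit_store_config() {
cat << EOF
<store>
  @type elasticsearch
$([ -n "${ELASTICSEARCH_USER}" ] && echo "  user ${ELASTICSEARCH_USER}")
$([ -n "${ELASTICSEARCH_PASSWORD}" ] && echo "  password ${ELASTICSEARCH_PASSWORD}")
</store>
EOF
}

emit_store_config
```

Note that an empty substitution still leaves a blank line in the output, which fluentd's config parser tolerates; that matches the generated config in the script above.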
