
Update ELK to "proper" images #15808

Merged (1 commit) on Dec 12, 2023

Conversation

@tianon (Member) commented Nov 30, 2023

This is a concrete test of what I was suggesting in #15753 (comment) (not intended for merge as-is)

This updates them to point to the original source repository so they build directly instead of being a special case, and updates the maintainers appropriately.

See #15753 (comment) and following conversation 👍
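
Concretely, each stanza in the new _bashbrew-cat below pins a branch ref plus a commit in elastic/dockerfiles and builds one subdirectory with BuildKit. A rough local equivalent for the elasticsearch:7.17.15 entry (a sketch assuming plain git and Docker; bashbrew's actual invocation may differ):

  git clone -b 7.17 https://github.com/elastic/dockerfiles.git
  cd dockerfiles
  git checkout 63269da8548948c20f3e1650cebbed1ae5eee90e   # GitCommit from the manifest
  DOCKER_BUILDKIT=1 docker build -t elasticsearch:7.17.15 elasticsearch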

@tianon changed the title from "Draft: testing ELK as "proper" images" to "Update ELK to "proper" images" on Dec 7, 2023
@tianon marked this pull request as ready for review on December 7, 2023 00:56

github-actions bot commented Dec 7, 2023

Diff for 91cbef7:
diff --git a/_bashbrew-cat b/_bashbrew-cat
index 0a1c19b..cfea52a 100644
--- a/_bashbrew-cat
+++ b/_bashbrew-cat
@@ -1,43 +1,49 @@
 # elasticsearch
-Maintainers: Tianon Gravi <[email protected]> (@tianon), Joseph Ferguson <[email protected]> (@yosifkit)
-GitRepo: https://github.com/docker-library/elasticsearch.git
+Maintainers: Mark Vieira (@mark-vieira)
+GitRepo: https://github.com/elastic/dockerfiles.git
+Directory: elasticsearch
+Builder: buildkit
 
 Tags: 7.17.15
 Architectures: amd64, arm64v8
-GitCommit: 917e6014bd129e3e2193e2ec52245c9f599e1e84
-Directory: 7
+GitFetch: refs/heads/7.17
+GitCommit: 63269da8548948c20f3e1650cebbed1ae5eee90e
 
 Tags: 8.11.1
 Architectures: amd64, arm64v8
-GitCommit: 84a62adaf958d51f39376ed636a7d59f2e92ca69
-Directory: 8
+GitFetch: refs/heads/8.11
+GitCommit: 0dbf0d4297a2bf48d0cd980935431c121396885d
 
 
 # kibana
-Maintainers: Tianon Gravi <[email protected]> (@tianon), Joseph Ferguson <[email protected]> (@yosifkit)
-GitRepo: https://github.com/docker-library/kibana.git
+Maintainers: Thomas Watson (@watson)
+GitRepo: https://github.com/elastic/dockerfiles.git
+Directory: kibana
+Builder: buildkit
 
 Tags: 7.17.15
 Architectures: amd64, arm64v8
-GitCommit: 9c6a9d1c949c08a7254e2ff66594e25c4be54da7
-Directory: 7
+GitFetch: refs/heads/7.17
+GitCommit: 63269da8548948c20f3e1650cebbed1ae5eee90e
 
 Tags: 8.11.1
 Architectures: amd64, arm64v8
-GitCommit: d86e69a854eb6bb1c4b4b33d629b784eb6ca539d
-Directory: 8
+GitFetch: refs/heads/8.11
+GitCommit: 0dbf0d4297a2bf48d0cd980935431c121396885d
 
 
 # logstash
-Maintainers: Tianon Gravi <[email protected]> (@tianon), Joseph Ferguson <[email protected]> (@yosifkit)
-GitRepo: https://github.com/docker-library/logstash.git
+Maintainers: João Duarte (@jsvd)
+GitRepo: https://github.com/elastic/dockerfiles.git
+Directory: logstash
+Builder: buildkit
 
 Tags: 7.17.15
 Architectures: amd64, arm64v8
-GitCommit: e8bb7b091d72d4592901eb24379c9dde4975775f
-Directory: 7
+GitFetch: refs/heads/7.17
+GitCommit: 63269da8548948c20f3e1650cebbed1ae5eee90e
 
 Tags: 8.11.1
 Architectures: amd64, arm64v8
-GitCommit: 7866ca9dec9586e2e4a1a8d19a2958dfac93cede
-Directory: 8
+GitFetch: refs/heads/8.11
+GitCommit: 0dbf0d4297a2bf48d0cd980935431c121396885d
diff --git a/elasticsearch_7.17.15/Dockerfile b/elasticsearch_7.17.15/Dockerfile
index e870bd9..7c17947 100644
--- a/elasticsearch_7.17.15/Dockerfile
+++ b/elasticsearch_7.17.15/Dockerfile
@@ -1,17 +1,165 @@
-# Elasticsearch 7.17.15
+################################################################################
+# This Dockerfile was generated from the template at distribution/src/docker/Dockerfile
+#
+# Beginning of multi stage Dockerfile
+################################################################################
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/elasticsearch/elasticsearch:7.17.15@sha256:24a334f30e8730cd520b7f6dfd3ec0f39efb3c9732f69880de222002299597aa
-# Supported Bashbrew Architectures: amd64 arm64v8
+################################################################################
+# Build stage 0 `builder`:
+# Extract Elasticsearch artifact
+################################################################################
+FROM ubuntu:20.04 AS builder
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v7.17.15/elasticsearch
+# Install required packages to extract the Elasticsearch distribution
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v7.17.15:elasticsearch'
+RUN for iter in 1 2 3 4 5 6 7 8 9 10; do \
+      apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl  && \
+      exit_code=0 && break || \
+        exit_code=$? && echo "apt-get error: retry $iter in 10s" && sleep 10; \
+    done; \
+    exit $exit_code
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
+# `tini` is a tiny but valid init for containers. This is used to cleanly
+# control how ES and any child processes are shut down.
+#
+# The tini GitHub page gives instructions for verifying the binary using
+# gpg, but the keyservers are slow to return the key and this can fail the
+# build. Instead, we check the binary against the published checksum.
+RUN set -eux ; \
+    tini_bin="" ; \
+    case "$(arch)" in \
+        aarch64) tini_bin='tini-arm64' ;; \
+        x86_64)  tini_bin='tini-amd64' ;; \
+        *) echo >&2 ; echo >&2 "Unsupported architecture $(arch)" ; echo >&2 ; exit 1 ;; \
+    esac ; \
+    curl --retry 10 -S -L -O https://github.com/krallin/tini/releases/download/v0.19.0/${tini_bin} ; \
+    curl --retry 10 -S -L -O https://github.com/krallin/tini/releases/download/v0.19.0/${tini_bin}.sha256sum ; \
+    sha256sum -c ${tini_bin}.sha256sum ; \
+    rm ${tini_bin}.sha256sum ; \
+    mv ${tini_bin} /bin/tini ; \
+    chmod 0555 /bin/tini
 
-# For Elasticsearch documentation visit https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
+RUN mkdir /usr/share/elasticsearch
+WORKDIR /usr/share/elasticsearch
 
-# See https://github.com/docker-library/official-images/pull/4916 for more details.
+RUN curl --retry 10 -S -L --output /tmp/elasticsearch.tar.gz https://artifacts-no-kpi.elastic.co/downloads/elasticsearch/elasticsearch-7.17.15-linux-$(arch).tar.gz
+
+RUN tar -zxf /tmp/elasticsearch.tar.gz --strip-components=1
+
+# The distribution includes a `config` directory, no need to create it
+COPY config/elasticsearch.yml config/
+COPY config/log4j2.properties config/log4j2.docker.properties
+
+#  1. Configure the distribution for Docker
+#  2. Create required directory
+#  3. Move the distribution's default logging config aside
+#  4. Move the generated docker logging config so that it is the default
+#  5. Reset permissions on all directories
+#  6. Reset permissions on all files
+#  7. Make CLI tools executable
+#  8. Make some directories writable. `bin` must be writable because
+#     plugins can install their own CLI utilities.
+#  9. Make some files writable
+RUN sed -i -e 's/ES_DISTRIBUTION_TYPE=tar/ES_DISTRIBUTION_TYPE=docker/' bin/elasticsearch-env && \
+    mkdir data && \
+    mv config/log4j2.properties config/log4j2.file.properties && \
+    mv config/log4j2.docker.properties config/log4j2.properties && \
+    find . -type d -exec chmod 0555 {} + && \
+    find . -type f -exec chmod 0444 {} + && \
+    chmod 0555 bin/* jdk/bin/* jdk/lib/jspawnhelper modules/x-pack-ml/platform/linux-*/bin/* && \
+    chmod 0775 bin config config/jvm.options.d data logs plugins && \
+    find config -type f -exec chmod 0664 {} +
+
+################################################################################
+# Build stage 1 (the actual Elasticsearch image):
+#
+# Copy elasticsearch from stage 0
+# Add entrypoint
+################################################################################
+
+FROM ubuntu:20.04
+
+# Change default shell to bash, then install required packages with retries.
+RUN yes no | dpkg-reconfigure dash && \
+    for iter in 1 2 3 4 5 6 7 8 9 10; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update && \
+      apt-get upgrade -y && \
+      apt-get install -y --no-install-recommends \
+        ca-certificates curl netcat p11-kit unzip zip && \
+      apt-get clean && \
+      rm -rf /var/lib/apt/lists/* && \
+      exit_code=0 && break || \
+        exit_code=$? && echo "apt-get error: retry $iter in 10s" && sleep 10; \
+    done; \
+    exit $exit_code
+
+RUN groupadd -g 1000 elasticsearch && \
+    adduser --uid 1000 --gid 1000 --home /usr/share/elasticsearch elasticsearch && \
+    adduser elasticsearch root && \
+    chown -R 0:0 /usr/share/elasticsearch
+
+ENV ELASTIC_CONTAINER true
+
+WORKDIR /usr/share/elasticsearch
+
+COPY --from=builder --chown=0:0 /usr/share/elasticsearch /usr/share/elasticsearch
+COPY --from=builder --chown=0:0 /bin/tini /bin/tini
+
+ENV PATH /usr/share/elasticsearch/bin:$PATH
+
+COPY bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
+
+# 1. Sync the user and group permissions of /etc/passwd
+# 2. Set correct permissions of the entrypoint
+# 3. Ensure that there are no files with setuid or setgid, in order to mitigate "stackclash" attacks.
+#    We've already run this in previous layers so it ought to be a no-op.
+# 4. Replace OpenJDK's built-in CA certificate keystore with the one from the OS
+#    vendor. The latter is superior in several ways.
+#    REF: https://github.com/elastic/elasticsearch-docker/issues/171
+# 5. Tighten up permissions on the ES home dir (the permissions of the contents are handled earlier)
+# 6. You can't install plugins that include configuration when running as `elasticsearch` and the `config`
+#    dir is owned by `root`, because the installer tries to manipulate the permissions on the plugin's
+#    config directory.
+RUN chmod g=u /etc/passwd && \
+    chmod 0555 /usr/local/bin/docker-entrypoint.sh && \
+    find / -xdev -perm -4000 -exec chmod ug-s {} + && \
+    chmod 0775 /usr/share/elasticsearch && \
+    chown elasticsearch bin config config/jvm.options.d data logs plugins
+
+# Update "cacerts" bundle to use Ubuntu's CA certificates (and make sure it
+# stays up-to-date with changes to Ubuntu's store)
+COPY bin/docker-openjdk /etc/ca-certificates/update.d/docker-openjdk
+RUN /etc/ca-certificates/update.d/docker-openjdk
+
+EXPOSE 9200 9300
+
+LABEL org.label-schema.build-date="2023-11-10T22:03:46.987399016Z" \
+  org.label-schema.license="Elastic-License-2.0" \
+  org.label-schema.name="Elasticsearch" \
+  org.label-schema.schema-version="1.0" \
+  org.label-schema.url="https://www.elastic.co/products/elasticsearch" \
+  org.label-schema.usage="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
+  org.label-schema.vcs-ref="0b8ecfb4378335f4689c4223d1f1115f16bef3ba" \
+  org.label-schema.vcs-url="https://github.com/elastic/elasticsearch" \
+  org.label-schema.vendor="Elastic" \
+  org.label-schema.version="7.17.15" \
+  org.opencontainers.image.created="2023-11-10T22:03:46.987399016Z" \
+  org.opencontainers.image.documentation="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
+  org.opencontainers.image.licenses="Elastic-License-2.0" \
+  org.opencontainers.image.revision="0b8ecfb4378335f4689c4223d1f1115f16bef3ba" \
+  org.opencontainers.image.source="https://github.com/elastic/elasticsearch" \
+  org.opencontainers.image.title="Elasticsearch" \
+  org.opencontainers.image.url="https://www.elastic.co/products/elasticsearch" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.opencontainers.image.version="7.17.15"
+
+# Our actual entrypoint is `tini`, a minimal but functional init program. It
+# calls the entrypoint we provide, while correctly forwarding signals.
+ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
+# Dummy overridable parameter parsed by entrypoint
+CMD ["eswrapper"]
+
+################################################################################
+# End of multi-stage Dockerfile
+################################################################################
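
A minimal smoke test of the image this Dockerfile produces (discovery.type=single-node is standard Elasticsearch Docker usage, not something introduced by this diff):

  docker run --rm -d -p 9200:9200 -e discovery.type=single-node elasticsearch:7.17.15
  curl -s http://localhost:9200/   # should return the cluster-info JSON once startup finishes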
diff --git a/elasticsearch_7.17.15/bin/docker-entrypoint.sh b/elasticsearch_7.17.15/bin/docker-entrypoint.sh
new file mode 100755
index 0000000..eeb9832
--- /dev/null
+++ b/elasticsearch_7.17.15/bin/docker-entrypoint.sh
@@ -0,0 +1,101 @@
+#!/bin/bash
+set -e
+
+# Files created by Elasticsearch should always be group writable too
+umask 0002
+
+run_as_other_user_if_needed() {
+  if [[ "$(id -u)" == "0" ]]; then
+    # If running as root, drop to specified UID and run command
+    exec chroot --userspec=1000:0 / "${@}"
+  else
+    # Either we are running in Openshift with random uid and are a member of the root group
+    # or with a custom --user
+    exec "${@}"
+  fi
+}
+
+# Allow the user to specify a custom CMD, maybe bin/elasticsearch itself,
+# for example to directly specify `-E` style parameters for elasticsearch on k8s
+# or simply to run /bin/bash to check the image
+if [[ "$1" != "eswrapper" ]]; then
+  if [[ "$(id -u)" == "0" && $(basename "$1") == "elasticsearch" ]]; then
+    # centos:7 chroot doesn't have the `--skip-chdir` option and
+    # changes our CWD.
+    # Rewrite CMD args to replace $1 with `elasticsearch` explicitly,
+    # so that we are backwards compatible with the docs
+#    from the previous Elasticsearch versions < 6
+    # and configuration option D:
+    # https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html#_d_override_the_image_8217_s_default_ulink_url_https_docs_docker_com_engine_reference_run_cmd_default_command_or_options_cmd_ulink
+    # Without this, user could specify `elasticsearch -E x.y=z` but
+    # `bin/elasticsearch -E x.y=z` would not work.
+    set -- "elasticsearch" "${@:2}"
+    # Use chroot to switch to UID 1000 / GID 0
+    exec chroot --userspec=1000:0 / "$@"
+  else
+    # User probably wants to run something else, like /bin/bash, with another uid forced (Openshift?)
+    exec "$@"
+  fi
+fi
+
+# Allow environment variables to be set by creating a file with the
+# contents, and setting an environment variable with the suffix _FILE to
+# point to it. This can be used to provide secrets to a container, without
+# the values being specified explicitly when running the container.
+#
+# This is also sourced in elasticsearch-env, and is only needed here
+# as well because we use ELASTIC_PASSWORD below. Sourcing this script
+# is idempotent.
+source /usr/share/elasticsearch/bin/elasticsearch-env-from-file
+
+if [[ -f bin/elasticsearch-users ]]; then
+  # Check for the ELASTIC_PASSWORD environment variable to set the
+  # bootstrap password for Security.
+  #
+  # This is only required for the first node in a cluster with Security
+  # enabled, but we have no way of knowing which node we are yet. We'll just
+  # honor the variable if it's present.
+  if [[ -n "$ELASTIC_PASSWORD" ]]; then
+    [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (run_as_other_user_if_needed elasticsearch-keystore create)
+    if ! (run_as_other_user_if_needed elasticsearch-keystore has-passwd --silent) ; then
+      # keystore is unencrypted
+      if ! (run_as_other_user_if_needed elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
+        (run_as_other_user_if_needed echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
+      fi
+    else
+      # keystore requires password
+      if ! (run_as_other_user_if_needed echo "$KEYSTORE_PASSWORD" \
+          | elasticsearch-keystore list | grep -q '^bootstrap.password$') ; then
+        COMMANDS="$(printf "%s\n%s" "$KEYSTORE_PASSWORD" "$ELASTIC_PASSWORD")"
+        (run_as_other_user_if_needed echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
+      fi
+    fi
+  fi
+fi
+
+if [[ "$(id -u)" == "0" ]]; then
+  # If requested and running as root, mutate the ownership of bind-mounts
+  if [[ -n "$TAKE_FILE_OWNERSHIP" ]]; then
+    chown -R 1000:0 /usr/share/elasticsearch/{data,logs}
+  fi
+fi
+
+if [[ -n "$ES_LOG_STYLE" ]]; then
+  case "$ES_LOG_STYLE" in
+    console)
+      # This is the default. Nothing to do.
+      ;;
+    file)
+      # Overwrite the default config with the stack config. Do this as a
+      # copy, not a move, in case the container is restarted.
+      cp -f /usr/share/elasticsearch/config/log4j2.file.properties /usr/share/elasticsearch/config/log4j2.properties
+      ;;
+    *)
+      echo "ERROR: ES_LOG_STYLE set to [$ES_LOG_STYLE]. Expected [console] or [file]" >&2
+      exit 1 ;;
+  esac
+fi
+
+# Signal forwarding and child reaping are handled by `tini`, which is the
+# actual entrypoint of the container
+run_as_other_user_if_needed /usr/share/elasticsearch/bin/elasticsearch <<<"$KEYSTORE_PASSWORD"
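
The _FILE indirection and TAKE_FILE_OWNERSHIP branches above are easiest to see from the run side. A sketch (the secret path, file names, and password value are illustrative, not from this PR):

  echo 'changeme-bootstrap' > bootstrap.txt
  docker run -p 9200:9200 \
      -e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrap \
      -v "$PWD/bootstrap.txt:/run/secrets/bootstrap:ro" \
      -e TAKE_FILE_OWNERSHIP=1 \
      -v "$PWD/esdata:/usr/share/elasticsearch/data" \
      elasticsearch:7.17.15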
diff --git a/elasticsearch_7.17.15/bin/docker-openjdk b/elasticsearch_7.17.15/bin/docker-openjdk
new file mode 100755
index 0000000..f3fe072
--- /dev/null
+++ b/elasticsearch_7.17.15/bin/docker-openjdk
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+
+set -Eeuo pipefail
+
+# Update "cacerts" bundle to use Ubuntu's CA certificates (and make sure it
+# stays up-to-date with changes to Ubuntu's store)
+
+trust extract \
+  --overwrite \
+  --format=java-cacerts \
+  --filter=ca-anchors \
+  --purpose=server-auth \
+  /usr/share/elasticsearch/jdk/lib/security/cacerts
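
This hook is installed under /etc/ca-certificates/update.d (see the COPY/RUN in the Dockerfile above), the directory whose executables Debian/Ubuntu's update-ca-certificates runs after refreshing the OS trust store; that is what keeps the JDK's cacerts in sync. A sketch of re-triggering it by hand in a running container (needs root, since the keystore under jdk/lib is read-only for uid 1000):

  docker exec -u 0 <container> update-ca-certificates --fresh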
diff --git a/elasticsearch_7.17.15/config/elasticsearch.yml b/elasticsearch_7.17.15/config/elasticsearch.yml
new file mode 100644
index 0000000..50b1547
--- /dev/null
+++ b/elasticsearch_7.17.15/config/elasticsearch.yml
@@ -0,0 +1,2 @@
+cluster.name: "docker-cluster"
+network.host: 0.0.0.0
diff --git a/elasticsearch_7.17.15/config/log4j2.properties b/elasticsearch_7.17.15/config/log4j2.properties
new file mode 100644
index 0000000..b46562d
--- /dev/null
+++ b/elasticsearch_7.17.15/config/log4j2.properties
@@ -0,0 +1,159 @@
+status = error
+
+######## Server JSON ############################
+appender.rolling.type = Console
+appender.rolling.name = rolling
+appender.rolling.layout.type = ESJsonLayout
+appender.rolling.layout.type_name = server
+
+################################################
+
+################################################
+
+rootLogger.level = info
+rootLogger.appenderRef.rolling.ref = rolling
+
+######## Deprecation JSON #######################
+appender.deprecation_rolling.type = Console
+appender.deprecation_rolling.name = deprecation_rolling
+appender.deprecation_rolling.layout.type = ESJsonLayout
+appender.deprecation_rolling.layout.type_name = deprecation.elasticsearch
+appender.deprecation_rolling.layout.esmessagefields=x-opaque-id,key,category,elasticsearch.elastic_product_origin
+appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
+
+appender.header_warning.type = HeaderWarningAppender
+appender.header_warning.name = header_warning
+#################################################
+
+#################################################
+logger.deprecation.name = org.elasticsearch.deprecation
+logger.deprecation.level = WARN
+logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
+logger.deprecation.appenderRef.header_warning.ref = header_warning
+logger.deprecation.additivity = false
+
+######## Search slowlog JSON ####################
+appender.index_search_slowlog_rolling.type = Console
+appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
+appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
+appender.index_search_slowlog_rolling.layout.type_name = index_search_slowlog
+appender.index_search_slowlog_rolling.layout.esmessagefields=message,took,took_millis,total_hits,types,stats,search_type,total_shards,source,id
+
+#################################################
+
+#################################################
+logger.index_search_slowlog_rolling.name = index.search.slowlog
+logger.index_search_slowlog_rolling.level = trace
+logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
+logger.index_search_slowlog_rolling.additivity = false
+
+######## Indexing slowlog JSON ##################
+appender.index_indexing_slowlog_rolling.type = Console
+appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
+appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
+appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog
+appender.index_indexing_slowlog_rolling.layout.esmessagefields=message,took,took_millis,doc_type,id,routing,source
+
+#################################################
+
+#################################################
+
+logger.index_indexing_slowlog.name = index.indexing.slowlog.index
+logger.index_indexing_slowlog.level = trace
+logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
+logger.index_indexing_slowlog.additivity = false
+
+appender.audit_rolling.type = Console
+appender.audit_rolling.name = audit_rolling
+appender.audit_rolling.layout.type = PatternLayout
+appender.audit_rolling.layout.pattern = {\
+                "type":"audit", \
+                "timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
+                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
+                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
+                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
+                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
+                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
+                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
+                %varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
+                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
+                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
+                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
+                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.roles":%map{user.roles}}\
+                %varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
+                %varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
+                %varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
+                %varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
+                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
+                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
+                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
+                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
+                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
+                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
+                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
+                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
+                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
+                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
+                %varsNotEmpty{, "indices":%map{indices}}\
+                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
+                %varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
+                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
+                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
+                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
+                %varsNotEmpty{, "put":%map{put}}\
+                %varsNotEmpty{, "delete":%map{delete}}\
+                %varsNotEmpty{, "change":%map{change}}\
+                %varsNotEmpty{, "create":%map{create}}\
+                %varsNotEmpty{, "invalidate":%map{invalidate}}\
+                }%n
+# "node.name" node name from the `elasticsearch.yml` settings
+# "node.id" node id which should not change between cluster restarts
+# "host.name" unresolved hostname of the local node
+# "host.ip" the local bound ip (i.e. the ip listening for connections)
+# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
+# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
+# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
+# "user.name" the subject name as authenticated by a realm
+# "user.run_by.name" the original authenticated subject name that is impersonating another one.
+# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
+# "user.realm" the name of the realm that authenticated "user.name"
+# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
+# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
+# "user.roles" the roles array of the user; these are the roles that are granting privileges
+# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
+# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
+# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
+# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
+# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
+# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
+# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
+# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
+# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
+# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
+# "request.body" the content of the request body entity, JSON escaped
+# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
+# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
+# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
+# "indices" the array of indices that the "action" is acting upon
+# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
+# "trace_id" an identifier conveyed by the part of "traceparent" request header
+# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
+# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
+# "rule" name of the applied rule if the "origin.type" is "ip_filter"
+# the "put", "delete", "change", "create", "invalidate" fields are only present
+# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect
+
+logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
+logger.xpack_security_audit_logfile.level = info
+logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
+logger.xpack_security_audit_logfile.additivity = false
+
+logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
+logger.xmlsig.level = error
+logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
+logger.samlxml_decrypt.level = fatal
+logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
+logger.saml2_decrypt.level = fatal
\ No newline at end of file
diff --git a/elasticsearch_8.11.1/Dockerfile b/elasticsearch_8.11.1/Dockerfile
index 676f79c..1ce8117 100644
--- a/elasticsearch_8.11.1/Dockerfile
+++ b/elasticsearch_8.11.1/Dockerfile
@@ -1,17 +1,168 @@
-# Elasticsearch 8.11.1
+################################################################################
+# This Dockerfile was generated from the template at distribution/src/docker/Dockerfile
+#
+# Beginning of multi stage Dockerfile
+################################################################################
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/elasticsearch/elasticsearch:8.11.1@sha256:cf3edd6518b0159d50c0f932f6cacd63930db01e1fb740499eca477543d42b34
-# Supported Bashbrew Architectures: amd64 arm64v8
+################################################################################
+# Build stage 1 `builder`:
+# Extract Elasticsearch artifact
+################################################################################
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v8.11.1/elasticsearch
+FROM ubuntu:20.04 AS builder
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v8.11.1:elasticsearch'
+# Install required packages to extract the Elasticsearch distribution
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
+RUN for iter in 1 2 3 4 5 6 7 8 9 10; do \
+      apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl  && \
+      exit_code=0 && break || \
+        exit_code=$? && echo "apt-get error: retry $iter in 10s" && sleep 10; \
+    done; \
+    exit $exit_code
 
-# For Elasticsearch documentation visit https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
+# `tini` is a tiny but valid init for containers. This is used to cleanly
+# control how ES and any child processes are shut down.
+#
+# The tini GitHub page gives instructions for verifying the binary using
+# gpg, but the keyservers are slow to return the key and this can fail the
+# build. Instead, we check the binary against the published checksum.
+RUN set -eux ; \
+    tini_bin="" ; \
+    case "$(arch)" in \
+        aarch64) tini_bin='tini-arm64' ;; \
+        x86_64)  tini_bin='tini-amd64' ;; \
+        *) echo >&2 ; echo >&2 "Unsupported architecture $(arch)" ; echo >&2 ; exit 1 ;; \
+    esac ; \
+    curl --retry 10 -S -L -O https://github.com/krallin/tini/releases/download/v0.19.0/${tini_bin} ; \
+    curl --retry 10 -S -L -O https://github.com/krallin/tini/releases/download/v0.19.0/${tini_bin}.sha256sum ; \
+    sha256sum -c ${tini_bin}.sha256sum ; \
+    rm ${tini_bin}.sha256sum ; \
+    mv ${tini_bin} /bin/tini ; \
+    chmod 0555 /bin/tini
 
-# See https://github.com/docker-library/official-images/pull/4916 for more details.
+RUN mkdir /usr/share/elasticsearch
+WORKDIR /usr/share/elasticsearch
+
+RUN curl --retry 10 -S -L --output /tmp/elasticsearch.tar.gz https://artifacts-no-kpi.elastic.co/downloads/elasticsearch/elasticsearch-8.11.1-linux-$(arch).tar.gz
+
+RUN tar -zxf /tmp/elasticsearch.tar.gz --strip-components=1
+
+# The distribution includes a `config` directory, no need to create it
+COPY config/elasticsearch.yml config/
+COPY config/log4j2.properties config/log4j2.docker.properties
+
+#  1. Configure the distribution for Docker
+#  2. Create required directory
+#  3. Move the distribution's default logging config aside
+#  4. Move the generated docker logging config so that it is the default
+#  5. Reset permissions on all directories
+#  6. Reset permissions on all files
+#  7. Make CLI tools executable
+#  8. Make some directories writable. `bin` must be writable because
+#     plugins can install their own CLI utilities.
+#  9. Make some files writable
+RUN sed -i -e 's/ES_DISTRIBUTION_TYPE=tar/ES_DISTRIBUTION_TYPE=docker/' bin/elasticsearch-env && \
+    mkdir data && \
+    mv config/log4j2.properties config/log4j2.file.properties && \
+    mv config/log4j2.docker.properties config/log4j2.properties && \
+    find . -type d -exec chmod 0555 {} + && \
+    find . -type f -exec chmod 0444 {} + && \
+    chmod 0555 bin/* jdk/bin/* jdk/lib/jspawnhelper modules/x-pack-ml/platform/linux-*/bin/* && \
+    chmod 0775 bin config config/jvm.options.d data logs plugins && \
+    find config -type f -exec chmod 0664 {} +
+
+################################################################################
+# Build stage 2 (the actual Elasticsearch image):
+#
+# Copy elasticsearch from stage 1
+# Add entrypoint
+################################################################################
+
+FROM ubuntu:20.04
+
+# Change default shell to bash, then install required packages with retries.
+RUN yes no | dpkg-reconfigure dash && \
+    for iter in 1 2 3 4 5 6 7 8 9 10; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update && \
+      apt-get upgrade -y && \
+      apt-get install -y --no-install-recommends \
+        ca-certificates curl netcat p11-kit unzip zip  && \
+      apt-get clean && \
+      rm -rf /var/lib/apt/lists/* && \
+      exit_code=0 && break || \
+        exit_code=$? && echo "apt-get error: retry $iter in 10s" && sleep 10; \
+    done; \
+    exit $exit_code
+
+RUN groupadd -g 1000 elasticsearch && \
+    adduser --uid 1000 --gid 1000 --home /usr/share/elasticsearch elasticsearch && \
+    adduser elasticsearch root && \
+    chown -R 0:0 /usr/share/elasticsearch
+
+ENV ELASTIC_CONTAINER true
+
+WORKDIR /usr/share/elasticsearch
+
+COPY --from=builder --chown=0:0 /usr/share/elasticsearch /usr/share/elasticsearch
+COPY --from=builder --chown=0:0 /bin/tini /bin/tini
+
+ENV PATH /usr/share/elasticsearch/bin:$PATH
+
+COPY bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
+
+# 1. Sync the user and group permissions of /etc/passwd
+# 2. Set correct permissions of the entrypoint
+# 3. Ensure that there are no files with setuid or setgid, in order to mitigate "stackclash" attacks.
+#    We've already run this in previous layers so it ought to be a no-op.
+# 4. Replace OpenJDK's built-in CA certificate keystore with the one from the OS
+#    vendor. The latter is superior in several ways.
+#    REF: https://github.com/elastic/elasticsearch-docker/issues/171
+# 5. Tighten up permissions on the ES home dir (the permissions of the contents are handled earlier)
+# 6. You can't install plugins that include configuration when running as `elasticsearch` and the `config`
+#    dir is owned by `root`, because the installer tries to manipulate the permissions on the plugin's
+#    config directory.
+RUN chmod g=u /etc/passwd && \
+    chmod 0555 /usr/local/bin/docker-entrypoint.sh && \
+    find / -xdev -perm -4000 -exec chmod ug-s {} + && \
+    chmod 0775 /usr/share/elasticsearch && \
+    chown elasticsearch bin config config/jvm.options.d data logs plugins
+
+# Update "cacerts" bundle to use Ubuntu's CA certificates (and make sure it
+# stays up-to-date with changes to Ubuntu's store)
+COPY bin/docker-openjdk /etc/ca-certificates/update.d/docker-openjdk
+RUN /etc/ca-certificates/update.d/docker-openjdk
+
+EXPOSE 9200 9300
+
+LABEL org.label-schema.build-date="2023-11-11T10:05:59.421038163Z" \
+  org.label-schema.license="Elastic-License-2.0" \
+  org.label-schema.name="Elasticsearch" \
+  org.label-schema.schema-version="1.0" \
+  org.label-schema.url="https://www.elastic.co/products/elasticsearch" \
+  org.label-schema.usage="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
+  org.label-schema.vcs-ref="6f9ff581fbcde658e6f69d6ce03050f060d1fd0c" \
+  org.label-schema.vcs-url="https://github.com/elastic/elasticsearch" \
+  org.label-schema.vendor="Elastic" \
+  org.label-schema.version="8.11.1" \
+  org.opencontainers.image.created="2023-11-11T10:05:59.421038163Z" \
+  org.opencontainers.image.documentation="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
+  org.opencontainers.image.licenses="Elastic-License-2.0" \
+  org.opencontainers.image.revision="6f9ff581fbcde658e6f69d6ce03050f060d1fd0c" \
+  org.opencontainers.image.source="https://github.com/elastic/elasticsearch" \
+  org.opencontainers.image.title="Elasticsearch" \
+  org.opencontainers.image.url="https://www.elastic.co/products/elasticsearch" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.opencontainers.image.version="8.11.1"
+
+# Our actual entrypoint is `tini`, a minimal but functional init program. It
+# calls the entrypoint we provide, while correctly forwarding signals.
+ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
+# Dummy overridable parameter parsed by entrypoint
+CMD ["eswrapper"]
+
+USER 1000:0
+
+################################################################################
+# End of multi-stage Dockerfile
+################################################################################
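
One behavioral difference from the 7.17.15 Dockerfile above: this image ends with USER 1000:0, so 8.x containers start unprivileged rather than dropping privileges in the entrypoint. One practical consequence (a sketch; the host directory name is illustrative) is that bind-mounted data directories must be writable by uid 1000 or gid 0:

  mkdir -p esdata && sudo chown -R 1000:0 esdata
  docker run -p 9200:9200 \
      -v "$PWD/esdata:/usr/share/elasticsearch/data" \
      elasticsearch:8.11.1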
diff --git a/elasticsearch_8.11.1/bin/docker-entrypoint.sh b/elasticsearch_8.11.1/bin/docker-entrypoint.sh
new file mode 100755
index 0000000..d7b41b8
--- /dev/null
+++ b/elasticsearch_8.11.1/bin/docker-entrypoint.sh
@@ -0,0 +1,84 @@
+#!/bin/bash
+set -e
+
+# Files created by Elasticsearch should always be group writable too
+umask 0002
+
+# Allow the user to specify a custom CMD, maybe bin/elasticsearch itself,
+# for example to directly specify `-E` style parameters for elasticsearch on k8s
+# or simply to run /bin/bash to check the image
+if [[ "$1" == "eswrapper" || $(basename "$1") == "elasticsearch" ]]; then
+  # Rewrite CMD args to remove the explicit command,
+  # so that we are backwards compatible with the docs
+  # from the previous Elasticsearch versions < 6
+  # and configuration option:
+  # https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html#_d_override_the_image_8217_s_default_ulink_url_https_docs_docker_com_engine_reference_run_cmd_default_command_or_options_cmd_ulink
+  # Without this, user could specify `elasticsearch -E x.y=z` but
+  # `bin/elasticsearch -E x.y=z` would not work. In any case,
+  # we want to continue through this script, and not exec early.
+  set -- "${@:2}"
+else
+  # Run whatever command the user wanted
+  exec "$@"
+fi
+
+# Allow environment variables to be set by creating a file with the
+# contents, and setting an environment variable with the suffix _FILE to
+# point to it. This can be used to provide secrets to a container, without
+# the values being specified explicitly when running the container.
+#
+# This is also sourced in elasticsearch-env, and is only needed here
+# as well because we use ELASTIC_PASSWORD below. Sourcing this script
+# is idempotent.
+source /usr/share/elasticsearch/bin/elasticsearch-env-from-file
+
+if [[ -f bin/elasticsearch-users ]]; then
+  # Check for the ELASTIC_PASSWORD environment variable to set the
+  # bootstrap password for Security.
+  #
+  # This is only required for the first node in a cluster with Security
+  # enabled, but we have no way of knowing which node we are yet. We'll just
+  # honor the variable if it's present.
+  if [[ -n "$ELASTIC_PASSWORD" ]]; then
+    [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (elasticsearch-keystore create)
+    if ! (elasticsearch-keystore has-passwd --silent) ; then
+      # keystore is unencrypted
+      if ! (elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
+        (echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
+      fi
+    else
+      # keystore requires password
+      if ! (echo "$KEYSTORE_PASSWORD" \
+          | elasticsearch-keystore list | grep -q '^bootstrap.password$') ; then
+        COMMANDS="$(printf "%s\n%s" "$KEYSTORE_PASSWORD" "$ELASTIC_PASSWORD")"
+        (echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
+      fi
+    fi
+  fi
+fi
+
+if [[ -n "$ES_LOG_STYLE" ]]; then
+  case "$ES_LOG_STYLE" in
+    console)
+      # This is the default. Nothing to do.
+      ;;
+    file)
+      # Overwrite the default config with the stack config. Do this as a
+      # copy, not a move, in case the container is restarted.
+      cp -f /usr/share/elasticsearch/config/log4j2.file.properties /usr/share/elasticsearch/config/log4j2.properties
+      ;;
+    *)
+      echo "ERROR: ES_LOG_STYLE set to [$ES_LOG_STYLE]. Expected [console] or [file]" >&2
+      exit 1 ;;
+  esac
+fi
+
+if [[ -n "$ENROLLMENT_TOKEN" ]]; then
+  POSITIONAL_PARAMETERS="--enrollment-token $ENROLLMENT_TOKEN"
+else
+  POSITIONAL_PARAMETERS=""
+fi
+
+# Signal forwarding and child reaping are handled by `tini`, which is the
+# actual entrypoint of the container
+exec /usr/share/elasticsearch/bin/elasticsearch "$@" $POSITIONAL_PARAMETERS <<<"$KEYSTORE_PASSWORD"
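
The ENROLLMENT_TOKEN branch above is how additional 8.x nodes join a security-enabled cluster. A sketch of that flow ("es01" is a hypothetical already-running, security-configured container; elasticsearch-create-enrollment-token is the stock CLI shipped in the distribution, not added by this diff):

  TOKEN="$(docker exec es01 bin/elasticsearch-create-enrollment-token --scope node)"
  docker run -e ENROLLMENT_TOKEN="$TOKEN" elasticsearch:8.11.1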
diff --git a/elasticsearch_8.11.1/bin/docker-openjdk b/elasticsearch_8.11.1/bin/docker-openjdk
new file mode 100755
index 0000000..f3fe072
--- /dev/null
+++ b/elasticsearch_8.11.1/bin/docker-openjdk
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+
+set -Eeuo pipefail
+
+# Update "cacerts" bundle to use Ubuntu's CA certificates (and make sure it
+# stays up-to-date with changes to Ubuntu's store)
+
+trust extract \
+  --overwrite \
+  --format=java-cacerts \
+  --filter=ca-anchors \
+  --purpose=server-auth \
+  /usr/share/elasticsearch/jdk/lib/security/cacerts
diff --git a/elasticsearch_8.11.1/config/elasticsearch.yml b/elasticsearch_8.11.1/config/elasticsearch.yml
new file mode 100644
index 0000000..50b1547
--- /dev/null
+++ b/elasticsearch_8.11.1/config/elasticsearch.yml
@@ -0,0 +1,2 @@
+cluster.name: "docker-cluster"
+network.host: 0.0.0.0
diff --git a/elasticsearch_8.11.1/config/log4j2.properties b/elasticsearch_8.11.1/config/log4j2.properties
new file mode 100644
index 0000000..c0d67c8
--- /dev/null
+++ b/elasticsearch_8.11.1/config/log4j2.properties
@@ -0,0 +1,193 @@
+status = error
+
+######## Server JSON ############################
+appender.rolling.type = Console
+appender.rolling.name = rolling
+appender.rolling.layout.type = ECSJsonLayout
+appender.rolling.layout.dataset = elasticsearch.server
+
+################################################
+
+################################################
+
+rootLogger.level = info
+rootLogger.appenderRef.rolling.ref = rolling
+
+######## Deprecation JSON #######################
+appender.deprecation_rolling.type = Console
+appender.deprecation_rolling.name = deprecation_rolling
+appender.deprecation_rolling.layout.type = ECSJsonLayout
+# Intentionally follows a different pattern to above
+appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
+appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
+
+appender.header_warning.type = HeaderWarningAppender
+appender.header_warning.name = header_warning
+#################################################
+
+logger.deprecation.name = org.elasticsearch.deprecation
+logger.deprecation.level = WARN
+logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
+logger.deprecation.appenderRef.header_warning.ref = header_warning
+logger.deprecation.additivity = false
+
+######## Search slowlog JSON ####################
+appender.index_search_slowlog_rolling.type = Console
+appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
+appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
+appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog
+
+#################################################
+
+#################################################
+logger.index_search_slowlog_rolling.name = index.search.slowlog
+logger.index_search_slowlog_rolling.level = trace
+logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
+logger.index_search_slowlog_rolling.additivity = false
+
+######## Indexing slowlog JSON ##################
+appender.index_indexing_slowlog_rolling.type = Console
+appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
+appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
+appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog
+
+#################################################
+
+logger.index_indexing_slowlog.name = index.indexing.slowlog.index
+logger.index_indexing_slowlog.level = trace
+logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
+logger.index_indexing_slowlog.additivity = false
+
+logger.org_apache_pdfbox.name = org.apache.pdfbox
+logger.org_apache_pdfbox.level = off
+
+logger.org_apache_poi.name = org.apache.poi
+logger.org_apache_poi.level = off
+
+logger.org_apache_fontbox.name = org.apache.fontbox
+logger.org_apache_fontbox.level = off
+
+logger.org_apache_xmlbeans.name = org.apache.xmlbeans
+logger.org_apache_xmlbeans.level = off
+
+logger.com_amazonaws.name = com.amazonaws
+logger.com_amazonaws.level = warn
+
+logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name = com.amazonaws.jmx.SdkMBeanRegistrySupport
+logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error
+
+logger.com_amazonaws_metrics_AwsSdkMetrics.name = com.amazonaws.metrics.AwsSdkMetrics
+logger.com_amazonaws_metrics_AwsSdkMetrics.level = error
+
+logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name = com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
+logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level = error
+
+logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name = com.amazonaws.services.s3.internal.UseArnRegionResolver
+logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error
+
+appender.audit_rolling.type = Console
+appender.audit_rolling.name = audit_rolling
+appender.audit_rolling.layout.type = PatternLayout
+appender.audit_rolling.layout.pattern = {\
+                "type":"audit", \
+                "timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
+                %varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
+                %varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
+                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
+                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
+                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
+                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
+                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
+                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
+                %varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
+                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
+                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
+                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
+                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.realm_domain":"%enc{%map{user.realm_domain}}{JSON}"}\
+                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.run_by.realm_domain":"%enc{%map{user.run_by.realm_domain}}{JSON}"}\
+                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
+                %varsNotEmpty{, "user.run_as.realm_domain":"%enc{%map{user.run_as.realm_domain}}{JSON}"}\
+                %varsNotEmpty{, "user.roles":%map{user.roles}}\
+                %varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
+                %varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
+                %varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
+                %varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
+                %varsNotEmpty{, "cross_cluster_access":%map{cross_cluster_access}}\
+                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
+                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
+                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
+                %varsNotEmpty{, "realm_domain":"%enc{%map{realm_domain}}{JSON}"}\
+                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
+                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
+                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
+                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
+                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
+                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
+                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
+                %varsNotEmpty{, "indices":%map{indices}}\
+                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
+                %varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
+                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
+                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
+                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
+                %varsNotEmpty{, "put":%map{put}}\
+                %varsNotEmpty{, "delete":%map{delete}}\
+                %varsNotEmpty{, "change":%map{change}}\
+                %varsNotEmpty{, "create":%map{create}}\
+                %varsNotEmpty{, "invalidate":%map{invalidate}}\
+                }%n
+# "node.name" node name from the `elasticsearch.yml` settings
+# "node.id" node id which should not change between cluster restarts
+# "host.name" unresolved hostname of the local node
+# "host.ip" the local bound ip (i.e. the ip listening for connections)
+# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
+# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
+# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
+# "user.name" the subject name as authenticated by a realm
+# "user.run_by.name" the original authenticated subject name that is impersonating another one.
+# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
+# "user.realm" the name of the realm that authenticated "user.name"
+# "user.realm_domain" if "user.realm" is under a domain, this is the name of the domain
+# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
+# "user.run_by.realm_domain" if "user.run_by.realm" is under a domain, this is the name of the domain
+# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
+# "user.run_as.realm_domain" if "user.run_as.realm" is under a domain, this is the name of the domain
+# "user.roles" the roles array of the user; these are the roles that are granting privileges
+# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
+# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
+# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
+# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
+# "cross_cluster_access" this field is present if and only if the associated authentication occurred cross cluster
+# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
+# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
+# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
+# "realm_domain" if "realm" is under a domain, this is the name of the domain
+# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
+# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
+# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
+# "request.body" the content of the request body entity, JSON escaped
+# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
+# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
+# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
+# "indices" the array of indices that the "action" is acting upon
+# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
+# "trace_id" an identifier conveyed by the part of "traceparent" request header
+# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
+# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
+# "rule" name of the applied rule if the "origin.type" is "ip_filter"
+# the "put", "delete", "change", "create", "invalidate" fields are only present
+# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect
+
+logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
+logger.xpack_security_audit_logfile.level = info
+logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
+logger.xpack_security_audit_logfile.additivity = false
+
+logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
+logger.xmlsig.level = error
+logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
+logger.samlxml_decrypt.level = fatal
+logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
+logger.saml2_decrypt.level = fatal
\ No newline at end of file
diff --git a/kibana_7.17.15/Dockerfile b/kibana_7.17.15/Dockerfile
index 714fe6b..8907634 100644
--- a/kibana_7.17.15/Dockerfile
+++ b/kibana_7.17.15/Dockerfile
@@ -1,17 +1,134 @@
-# Kibana 7.17.15
+################################################################################
+# This Dockerfile was generated from the template at:
+#   src/dev/build/tasks/os_packages/docker_generator/templates/Dockerfile
+#
+# Beginning of multi stage Dockerfile
+################################################################################
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/kibana/kibana:7.17.15@sha256:abe396c387d596282797c9b34190bffaf98e65ea020f15deb70630c7992e937a
-# Supported Bashbrew Architectures: amd64 arm64v8
+################################################################################
+# Build stage 0 `builder`:
+# Extract Kibana artifact
+################################################################################
+FROM ubuntu:20.04 AS builder
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v7.17.15/kibana
+RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v7.17.15:kibana'
+RUN cd /tmp && \
+  curl --retry 8 -s -L \
+    --output kibana.tar.gz \
+    https://artifacts.elastic.co/downloads/kibana/kibana-7.17.15-linux-$(arch).tar.gz && \
+  cd -
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
 
-# For documentation visit https://www.elastic.co/guide/en/kibana/current/docker.html
+RUN mkdir /usr/share/kibana
+WORKDIR /usr/share/kibana
+RUN tar --strip-components=1 -zxf /tmp/kibana.tar.gz
+# Ensure that group permissions are the same as user permissions.
+# This will help when relying on GID-0 to run Kibana, rather than UID-1000.
+# OpenShift does this, for example.
+# REF: https://docs.openshift.org/latest/creating_images/guidelines.html
+RUN chmod -R g=u /usr/share/kibana
 
-# See https://github.com/docker-library/official-images/pull/4917 for more details.
+
+################################################################################
+# Build stage 1 (the actual Kibana image):
+#
+# Copy kibana from stage 0
+# Add entrypoint
+################################################################################
+FROM ubuntu:20.04
+EXPOSE 5601
+
+RUN for iter in {1..10}; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update  && \
+      apt-get upgrade -y  && \
+      apt-get install -y --no-install-recommends \
+       fontconfig fonts-liberation libnss3 libfontconfig1 ca-certificates curl && \
+      apt-get clean && \
+      rm -rf /var/lib/apt/lists/* && exit_code=0 && break || exit_code=$? && echo "apt-get error: retry $iter in 10s" && \
+      sleep 10; \
+    done; \
+    (exit $exit_code)
+
+# Add an init process, check the checksum to make sure it's a match
+RUN set -e ; \
+    TINI_BIN="" ; \
+    case "$(arch)" in \
+        aarch64) \
+            TINI_BIN='tini-arm64' ; \
+            ;; \
+        x86_64) \
+            TINI_BIN='tini-amd64' ; \
+            ;; \
+        *) echo >&2 "Unsupported architecture $(arch)" ; exit 1 ;; \
+    esac ; \
+  TINI_VERSION='v0.19.0' ; \
+  curl --retry 8 -S -L -O "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/${TINI_BIN}" ; \
+  curl --retry 8 -S -L -O "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/${TINI_BIN}.sha256sum" ; \
+  sha256sum -c "${TINI_BIN}.sha256sum" ; \
+  rm "${TINI_BIN}.sha256sum" ; \
+  mv "${TINI_BIN}" /bin/tini ; \
+  chmod +x /bin/tini
+
+RUN mkdir /usr/share/fonts/local
+RUN curl --retry 8 -S -L -o /usr/share/fonts/local/NotoSansCJK-Regular.ttc https://github.com/googlefonts/noto-cjk/raw/NotoSansV2.001/NotoSansCJK-Regular.ttc
+RUN echo "5dcd1c336cc9344cb77c03a0cd8982ca8a7dc97d620fd6c9c434e02dcb1ceeb3  /usr/share/fonts/local/NotoSansCJK-Regular.ttc" | sha256sum -c -
+RUN fc-cache -v
+
+# Bring in Kibana from the initial stage.
+COPY --from=builder --chown=1000:0 /usr/share/kibana /usr/share/kibana
+WORKDIR /usr/share/kibana
+RUN ln -s /usr/share/kibana /opt/kibana
+
+ENV ELASTIC_CONTAINER true
+ENV PATH=/usr/share/kibana/bin:$PATH
+
+# Set some Kibana configuration defaults.
+COPY --chown=1000:0 config/kibana.yml /usr/share/kibana/config/kibana.yml
+
+# Add the launcher/wrapper script. It knows how to interpret environment
+# variables and translate them to Kibana CLI options.
+COPY bin/kibana-docker /usr/local/bin/
+
+# Ensure gid 0 write permissions for OpenShift.
+RUN chmod g+ws /usr/share/kibana && \
+    find /usr/share/kibana -gid 0 -and -not -perm /g+w -exec chmod g+w {} \;
+
+# Remove the suid bit everywhere to mitigate "Stack Clash"
+RUN find / -xdev -perm -4000 -exec chmod u-s {} +
+
+# Provide a non-root user to run the process.
+RUN groupadd --gid 1000 kibana && \
+    useradd --uid 1000 --gid 1000 -G 0 \
+      --home-dir /usr/share/kibana --no-create-home \
+      kibana
+
+LABEL org.label-schema.build-date="2023-11-10T20:46:53.174Z" \
+  org.label-schema.license="Elastic License" \
+  org.label-schema.name="Kibana" \
+  org.label-schema.schema-version="1.0" \
+  org.label-schema.url="https://www.elastic.co/products/kibana" \
+  org.label-schema.usage="https://www.elastic.co/guide/en/kibana/reference/index.html" \
+  org.label-schema.vcs-ref="60e0d4fa38a2c99350f1533c141f641edbb8e608" \
+  org.label-schema.vcs-url="https://github.com/elastic/kibana" \
+  org.label-schema.vendor="Elastic" \
+  org.label-schema.version="7.17.15" \
+  org.opencontainers.image.created="2023-11-10T20:46:53.174Z" \
+  org.opencontainers.image.documentation="https://www.elastic.co/guide/en/kibana/reference/index.html" \
+  org.opencontainers.image.licenses="Elastic License" \
+  org.opencontainers.image.revision="60e0d4fa38a2c99350f1533c141f641edbb8e608" \
+  org.opencontainers.image.source="https://github.com/elastic/kibana" \
+  org.opencontainers.image.title="Kibana" \
+  org.opencontainers.image.url="https://www.elastic.co/products/kibana" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.opencontainers.image.version="7.17.15"
+
+
+ENTRYPOINT ["/bin/tini", "--"]
+
+
+CMD ["/usr/local/bin/kibana-docker"]
+
+
+USER kibana
diff --git a/kibana_7.17.15/bin/kibana-docker b/kibana_7.17.15/bin/kibana-docker
new file mode 100755
index 0000000..08627dd
--- /dev/null
+++ b/kibana_7.17.15/bin/kibana-docker
@@ -0,0 +1,460 @@
+#!/bin/bash
+#
+# ** THIS IS AN AUTO-GENERATED FILE **
+#
+
+# Run Kibana, using environment variables to set longopts defining Kibana's
+# configuration.
+#
+# eg. Setting the environment variable:
+#
+#       ELASTICSEARCH_LOGQUERIES=true
+#
+# will cause Kibana to be invoked with:
+#
+#       --elasticsearch.logQueries=true
+
+kibana_vars=(
+    apm_oss.apmAgentConfigurationIndex
+    apm_oss.errorIndices
+    apm_oss.indexPattern
+    apm_oss.metricsIndices
+    apm_oss.onboardingIndices
+    apm_oss.sourcemapIndices
+    apm_oss.spanIndices
+    apm_oss.transactionIndices
+    console.enabled
+    console.proxyConfig
+    console.proxyFilter
+    cpu.cgroup.path.override
+    cpuacct.cgroup.path.override
+    csp.rules
+    csp.strict
+    csp.warnLegacyBrowsers
+    csp.script_src
+    csp.worker_src
+    csp.style_src
+    csp.connect_src
+    csp.default_src
+    csp.font_src
+    csp.frame_src
+    csp.img_src
+    csp.frame_ancestors
+    csp.report_uri
+    csp.report_to
+    data.autocomplete.valueSuggestions.terminateAfter
+    data.autocomplete.valueSuggestions.timeout
+    elasticsearch.customHeaders
+    elasticsearch.hosts
+    elasticsearch.logQueries
+    elasticsearch.password
+    elasticsearch.pingTimeout
+    elasticsearch.requestHeadersWhitelist
+    elasticsearch.requestTimeout
+    elasticsearch.serviceAccountToken
+    elasticsearch.shardTimeout
+    elasticsearch.sniffInterval
+    elasticsearch.sniffOnConnectionFault
+    elasticsearch.sniffOnStart
+    elasticsearch.ssl.alwaysPresentCertificate
+    elasticsearch.ssl.certificate
+    elasticsearch.ssl.certificateAuthorities
+    elasticsearch.ssl.key
+    elasticsearch.ssl.keyPassphrase
+    elasticsearch.ssl.keystore.password
+    elasticsearch.ssl.keystore.path
+    elasticsearch.ssl.truststore.password
+    elasticsearch.ssl.truststore.path
+    elasticsearch.ssl.verificationMode
+    elasticsearch.username
+    enterpriseSearch.accessCheckTimeout
+    enterpriseSearch.accessCheckTimeoutWarning
+    enterpriseSearch.enabled
+    enterpriseSearch.host
+    externalUrl.policy
+    i18n.locale
+    interpreter.enableInVisualize
+    kibana.autocompleteTerminateAfter
+    kibana.autocompleteTimeout
+    kibana.defaultAppId
+    kibana.index
+    logging.appenders
+    logging.appenders.console
+    logging.appenders.file
+    logging.dest
+    logging.json
+    logging.loggers
+    logging.loggers.appenders
+    logging.loggers.level
+    logging.loggers.name
+    logging.quiet
+    logging.root
+    logging.root.appenders
+    logging.root.level
+    logging.rotate.enabled
+    logging.rotate.everyBytes
+    logging.rotate.keepFiles
+    logging.rotate.pollingInterval
+    logging.rotate.usePolling
+    logging.silent
+    logging.useUTC
+    logging.verbose
+    map.includeElasticMapsService
+    map.proxyElasticMapsServiceInMaps
+    map.regionmap
+    map.tilemap.options.attribution
+    map.tilemap.options.maxZoom
+    map.tilemap.options.minZoom
+    map.tilemap.options.subdomains
+    map.tilemap.url
+    migrations.batchSize
+    migrations.maxBatchSizeBytes
+    migrations.pollInterval
+    migrations.retryAttempts
+    migrations.scrollDuration
+    migrations.skip
+    monitoring.cluster_alerts.email_notifications.email_address
+    monitoring.enabled
+    monitoring.kibana.collection.enabled
+    monitoring.kibana.collection.interval
+    monitoring.ui.container.elasticsearch.enabled
+    monitoring.ui.container.logstash.enabled
+    monitoring.ui.elasticsearch.hosts
+    monitoring.ui.elasticsearch.logFetchCount
+    monitoring.ui.elasticsearch.password
+    monitoring.ui.elasticsearch.pingTimeout
+    monitoring.ui.elasticsearch.ssl.certificateAuthorities
+    monitoring.ui.elasticsearch.ssl.verificationMode
+    monitoring.ui.elasticsearch.username
+    monitoring.ui.enabled
+    monitoring.ui.logs.index
+    monitoring.ui.max_bucket_size
+    monitoring.ui.min_interval_seconds
+    newsfeed.enabled
+    ops.cGroupOverrides.cpuAcctPath
+    ops.cGroupOverrides.cpuPath
+    ops.interval
+    path.data
+    pid.file
+    regionmap
+    savedObjects.maxImportExportSize
+    savedObjects.maxImportPayloadBytes
+    security.showInsecureClusterWarning
+    server.basePath
+    server.compression.enabled
+    server.compression.referrerWhitelist
+    server.cors
+    server.cors.allowCredentials
+    server.cors.allowOrigin
+    server.cors.enabled
+    server.cors.origin
+    server.customResponseHeaders
+    server.defaultRoute
+    server.host
+    server.keepAliveTimeout
+    server.maxPayload
+    server.maxPayloadBytes
+    server.name
+    server.port
+    server.publicBaseUrl
+    server.requestId.allowFromAnyIp
+    server.requestId.ipAllowlist
+    server.rewriteBasePath
+    server.securityResponseHeaders.disableEmbedding
+    server.securityResponseHeaders.permissionsPolicy
+    server.securityResponseHeaders.referrerPolicy
+    server.securityResponseHeaders.strictTransportSecurity
+    server.securityResponseHeaders.xContentTypeOptions
+    server.shutdownTimeout
+    server.socketTimeout
+    server.ssl.cert
+    server.ssl.certificate
+    server.ssl.certificateAuthorities
+    server.ssl.cipherSuites
+    server.ssl.clientAuthentication
+    server.ssl.enabled
+    server.ssl.key
+    server.ssl.keyPassphrase
+    server.ssl.keystore.password
+    server.ssl.keystore.path
+    server.ssl.redirectHttpFromPort
+    server.ssl.supportedProtocols
+    server.ssl.truststore.password
+    server.ssl.truststore.path
+    server.uuid
+    server.xsrf.allowlist
+    server.xsrf.disableProtection
+    server.xsrf.whitelist
+    status.allowAnonymous
+    status.v6ApiFormat
+    telemetry.allowChangingOptInStatus
+    telemetry.enabled
+    telemetry.optIn
+    telemetry.optInStatusUrl
+    telemetry.sendUsageTo
+    telemetry.sendUsageFrom
+    tilemap.options.attribution
+    tilemap.options.maxZoom
+    tilemap.options.minZoom
+    tilemap.options.subdomains
+    tilemap.url
+    url_drilldown.enabled
+    vega.enableExternalUrls
+    vis_type_vega.enableExternalUrls
+    xpack.actions.allowedHosts
+    xpack.actions.customHostSettings
+    xpack.actions.enabled
+    xpack.actions.email.domain_allowlist
+    xpack.actions.enabledActionTypes
+    xpack.actions.maxResponseContentLength
+    xpack.actions.preconfigured
+    xpack.actions.preconfiguredAlertHistoryEsIndex
+    xpack.actions.proxyBypassHosts
+    xpack.actions.proxyHeaders
+    xpack.actions.proxyOnlyHosts
+    xpack.actions.proxyRejectUnauthorizedCertificates
+    xpack.actions.proxyUrl
+    xpack.actions.rejectUnauthorized
+    xpack.actions.responseTimeout
+    xpack.actions.ssl.proxyVerificationMode
+    xpack.actions.ssl.verificationMode
+    xpack.alerting.healthCheck.interval
+    xpack.alerting.invalidateApiKeysTask.interval
+    xpack.alerting.invalidateApiKeysTask.removalDelay
+    xpack.alerting.defaultRuleTaskTimeout
+    xpack.alerts.healthCheck.interval
+    xpack.alerts.invalidateApiKeysTask.interval
+    xpack.alerts.invalidateApiKeysTask.removalDelay
+    xpack.apm.enabled
+    xpack.apm.indices.error
+    xpack.apm.indices.metric
+    xpack.apm.indices.onboarding
+    xpack.apm.indices.sourcemap
+    xpack.apm.indices.span
+    xpack.apm.indices.transaction
+    xpack.apm.maxServiceEnvironments
+    xpack.apm.searchAggregatedTransactions
+    xpack.apm.serviceMapEnabled
+    xpack.apm.serviceMapFingerprintBucketSize
+    xpack.apm.serviceMapFingerprintGlobalBucketSize
+    xpack.apm.ui.enabled
+    xpack.apm.ui.maxTraceItems
+    xpack.apm.ui.transactionGroupBucketSize
+    xpack.banners.backgroundColor
+    xpack.banners.disableSpaceBanners
+    xpack.banners.placement
+    xpack.banners.textColor
+    xpack.banners.textContent
+    xpack.canvas.enabled
+    xpack.code.disk.thresholdEnabled
+    xpack.code.disk.watermarkLow
+    xpack.code.indexRepoFrequencyMs
+    xpack.code.lsp.verbose
+    xpack.code.maxWorkspace
+    xpack.code.security.enableGitCertCheck
+    xpack.code.security.gitHostWhitelist
+    xpack.code.security.gitProtocolWhitelist
+    xpack.code.ui.enabled
+    xpack.code.updateRepoFrequencyMs
+    xpack.code.verbose
+    xpack.data_enhanced.search.sessions.defaultExpiration
+    xpack.data_enhanced.search.sessions.enabled
+    xpack.data_enhanced.search.sessions.maxUpdateRetries
+    xpack.data_enhanced.search.sessions.notTouchedInProgressTimeout
+    xpack.data_enhanced.search.sessions.notTouchedTimeout
+    xpack.data_enhanced.search.sessions.pageSize
+    xpack.data_enhanced.search.sessions.trackingInterval
+    xpack.discoverEnhanced.actions.exploreDataInChart.enabled
+    xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled
+    xpack.encryptedSavedObjects.encryptionKey
+    xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys
+    xpack.event_log.enabled
+    xpack.event_log.indexEntries
+    xpack.event_log.logEntries
+    xpack.fleet.agentPolicies
+    xpack.fleet.agents.elasticsearch.host
+    xpack.fleet.agents.elasticsearch.hosts
+    xpack.fleet.agents.enabled
+    xpack.fleet.agents.fleet_server.hosts
+    xpack.fleet.agents.kibana.host
+    xpack.fleet.agents.tlsCheckDisabled
+    xpack.fleet.enabled
+    xpack.fleet.packages
+    xpack.fleet.registryProxyUrl
+    xpack.fleet.registryUrl
+    xpack.graph.canEditDrillDownUrls
+    xpack.graph.enabled
+    xpack.graph.savePolicy
+    xpack.grokdebugger.enabled
+    xpack.infra.enabled
+    xpack.infra.query.partitionFactor
+    xpack.infra.query.partitionSize
+    xpack.infra.sources.default.fields.container
+    xpack.infra.sources.default.fields.host
+    xpack.infra.sources.default.fields.message
+    xpack.infra.sources.default.fields.pod
+    xpack.infra.sources.default.fields.tiebreaker
+    xpack.infra.sources.default.fields.timestamp
+    xpack.infra.sources.default.logAlias
+    xpack.infra.sources.default.metricAlias
+    xpack.ingestManager.fleet.tlsCheckDisabled
+    xpack.ingestManager.registryUrl
+    xpack.license_management.enabled
+    xpack.maps.enabled
+    xpack.maps.showMapVisualizationTypes
+    xpack.ml.enabled
+    xpack.observability.annotations.index
+    xpack.observability.unsafe.alertingExperience.enabled
+    xpack.observability.unsafe.cases.enabled
+    xpack.painless_lab.enabled
+    xpack.reporting.capture.browser.autoDownload
+    xpack.reporting.capture.browser.chromium.disableSandbox
+    xpack.reporting.capture.browser.chromium.inspect
+    xpack.reporting.capture.browser.chromium.maxScreenshotDimension
+    xpack.reporting.capture.browser.chromium.proxy.bypass
+    xpack.reporting.capture.browser.chromium.proxy.enabled
+    xpack.reporting.capture.browser.chromium.proxy.server
+    xpack.reporting.capture.browser.type
+    xpack.reporting.capture.concurrency
+    xpack.reporting.capture.loadDelay
+    xpack.reporting.capture.maxAttempts
+    xpack.reporting.capture.networkPolicy
+    xpack.reporting.capture.settleTime
+    xpack.reporting.capture.timeout
+    xpack.reporting.capture.timeouts.openUrl
+    xpack.reporting.capture.timeouts.renderComplete
+    xpack.reporting.capture.timeouts.waitForElements
+    xpack.reporting.capture.viewport.height
+    xpack.reporting.capture.viewport.width
+    xpack.reporting.capture.zoom
+    xpack.reporting.csv.checkForFormulas
+    xpack.reporting.csv.enablePanelActionDownload
+    xpack.reporting.csv.escapeFormulaValues
+    xpack.reporting.csv.maxSizeBytes
+    xpack.reporting.csv.scroll.duration
+    xpack.reporting.csv.scroll.size
+    xpack.reporting.csv.useByteOrderMarkEncoding
+    xpack.reporting.enabled
+    xpack.reporting.encryptionKey
+    xpack.reporting.index
+    xpack.reporting.kibanaApp
+    xpack.reporting.kibanaServer.hostname
+    xpack.reporting.kibanaServer.port
+    xpack.reporting.kibanaServer.protocol
+    xpack.reporting.poll.jobCompletionNotifier.interval
+    xpack.reporting.poll.jobCompletionNotifier.intervalErrorMultiplier
+    xpack.reporting.poll.jobsRefresh.interval
+    xpack.reporting.poll.jobsRefresh.intervalErrorMultiplier
+    xpack.reporting.queue.indexInterval
+    xpack.reporting.queue.pollEnabled
+    xpack.reporting.queue.pollInterval
+    xpack.reporting.queue.pollIntervalErrorMultiplier
+    xpack.reporting.queue.timeout
+    xpack.reporting.roles.allow
+    xpack.reporting.roles.enabled
+    xpack.rollup.enabled
+    xpack.ruleRegistry.write.enabled
+    xpack.searchprofiler.enabled
+    xpack.security.audit.appender.fileName
+    xpack.security.audit.appender.layout.highlight
+    xpack.security.audit.appender.layout.pattern
+    xpack.security.audit.appender.layout.type
+    xpack.security.audit.appender.legacyLoggingConfig
+    xpack.security.audit.appender.policy.interval
+    xpack.security.audit.appender.policy.modulate
+    xpack.security.audit.appender.policy.size
+    xpack.security.audit.appender.policy.type
+    xpack.security.audit.appender.strategy.max
+    xpack.security.audit.appender.strategy.pattern
+    xpack.security.audit.appender.strategy.type
+    xpack.security.audit.appender.type
+    xpack.security.audit.enabled
+    xpack.security.audit.ignore_filters
+    xpack.security.authc.http.autoSchemesEnabled
+    xpack.security.authc.http.enabled
+    xpack.security.authc.http.schemes
+    xpack.security.authc.oidc.realm
+    xpack.security.authc.providers
+    xpack.security.authc.saml.maxRedirectURLSize
+    xpack.security.authc.saml.realm
+    xpack.security.authc.selector.enabled
+    xpack.security.authProviders
+    xpack.security.cookieName
+    xpack.security.enabled
+    xpack.security.encryptionKey
+    xpack.security.loginAssistanceMessage
+    xpack.security.loginHelp
+    xpack.security.public.hostname
+    xpack.security.public.port
+    xpack.security.public.protocol
+    xpack.security.sameSiteCookies
+    xpack.security.secureCookies
+    xpack.security.session.cleanupInterval
+    xpack.security.session.idleTimeout
+    xpack.security.session.lifespan
+    xpack.security.sessionTimeout
+    xpack.security.showInsecureClusterWarning
+    xpack.securitySolution.alertMergeStrategy
+    xpack.securitySolution.alertIgnoreFields
+    xpack.securitySolution.endpointResultListDefaultFirstPageIndex
+    xpack.securitySolution.endpointResultListDefaultPageSize
+    xpack.securitySolution.maxRuleImportExportSize
+    xpack.securitySolution.maxRuleImportPayloadBytes
+    xpack.securitySolution.maxTimelineImportExportSize
+    xpack.securitySolution.maxTimelineImportPayloadBytes
+    xpack.securitySolution.packagerTaskInterval
+    xpack.securitySolution.prebuiltRulesFromFileSystem
+    xpack.securitySolution.prebuiltRulesFromSavedObjects
+    xpack.spaces.enabled
+    xpack.spaces.maxSpaces
+    xpack.task_manager.enabled
+    xpack.task_manager.index
+    xpack.task_manager.max_attempts
+    xpack.task_manager.max_poll_inactivity_cycles
+    xpack.task_manager.max_workers
+    xpack.task_manager.monitored_aggregated_stats_refresh_rate
+    xpack.task_manager.monitored_stats_required_freshness
+    xpack.task_manager.monitored_stats_running_average_window
+    xpack.task_manager.monitored_stats_health_verbose_log.enabled
+    xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds
+    xpack.task_manager.monitored_task_execution_thresholds
+    xpack.task_manager.poll_interval
+    xpack.task_manager.request_capacity
+    xpack.task_manager.version_conflict_threshold
+    xpack.task_manager.event_loop_delay.monitor
+    xpack.task_manager.event_loop_delay.warn_threshold
+)
+
+longopts=''
+for kibana_var in ${kibana_vars[*]}; do
+    # 'elasticsearch.hosts' -> 'ELASTICSEARCH_HOSTS'
+    env_var=$(echo ${kibana_var^^} | tr . _)
+
+    # Indirectly lookup env var values via the name of the var.
+    # REF: http://tldp.org/LDP/abs/html/bashver2.html#EX78
+    value=${!env_var}
+    if [[ -n $value ]]; then
+      longopt="--${kibana_var}=${value}"
+      longopts+=" ${longopt}"
+    fi
+done
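+# Editorial illustration (not part of the generated script): with, e.g.,
+# SERVER_NAME=kibana and ELASTICSEARCH_HOSTS=http://elasticsearch:9200 set in
+# the environment (hypothetical values), the loop above yields
+# longopts=' --elasticsearch.hosts=http://elasticsearch:9200 --server.name=kibana'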
+
+# Files created at run-time should be group-writable, for OpenShift's sake.
+umask 0002
+
+# The virtual file /proc/self/cgroup should list the current cgroup
+# membership. For each hierarchy, you can follow the cgroup path from
+# this file to the cgroup filesystem (usually /sys/fs/cgroup/) and
+# introspect the statistics for the cgroup for the given
+# hierarchy. Alas, Docker breaks this by mounting the container
+# statistics at the root while leaving the cgroup paths as the actual
+# paths. Therefore, Kibana provides a mechanism to override
+# reading the cgroup path from /proc/self/cgroup and instead use the
+# cgroup path defined in the configuration properties
+# cpu.cgroup.path.override and cpuacct.cgroup.path.override.
+# Accordingly, we set these values here so that cgroup statistics are
+# available for the container this process will run in.
+
+exec /usr/share/kibana/bin/kibana --ops.cGroupOverrides.cpuPath=/ --ops.cGroupOverrides.cpuAcctPath=/ ${longopts} "$@"
diff --git a/kibana_7.17.15/config/kibana.yml b/kibana_7.17.15/config/kibana.yml
new file mode 100644
index 0000000..230ba1c
--- /dev/null
+++ b/kibana_7.17.15/config/kibana.yml
@@ -0,0 +1,9 @@
+#
+# ** THIS IS AN AUTO-GENERATED FILE **
+#
+
+# Default Kibana configuration for docker target
+server.host: "0.0.0.0"
+server.shutdownTimeout: "5s"
+elasticsearch.hosts: [ "http://elasticsearch:9200" ]
+monitoring.ui.container.elasticsearch.enabled: true
\ No newline at end of file
diff --git a/kibana_8.11.1/Dockerfile b/kibana_8.11.1/Dockerfile
index dd60a2e..2c1d60e 100644
--- a/kibana_8.11.1/Dockerfile
+++ b/kibana_8.11.1/Dockerfile
@@ -1,17 +1,134 @@
-# Kibana 8.11.1
+################################################################################
+# This Dockerfile was generated from the template at:
+#   src/dev/build/tasks/os_packages/docker_generator/templates/Dockerfile
+#
+# Beginning of multi stage Dockerfile
+################################################################################
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/kibana/kibana:8.11.1@sha256:52bc81c818900f916e7ba37fe3056c9f5488606399b02335434733670378a89a
-# Supported Bashbrew Architectures: amd64 arm64v8
+################################################################################
+# Build stage 0 `builder`:
+# Extract Kibana artifact
+################################################################################
+FROM ubuntu:20.04 AS builder
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v8.11.1/kibana
+RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v8.11.1:kibana'
+RUN cd /tmp && \
+  curl --retry 8 -s -L \
+    --output kibana.tar.gz \
+    https://artifacts.elastic.co/downloads/kibana/kibana-8.11.1-linux-$(arch).tar.gz && \
+  cd -
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
 
-# For documentation visit https://www.elastic.co/guide/en/kibana/current/docker.html
+RUN mkdir /usr/share/kibana
+WORKDIR /usr/share/kibana
+RUN tar --strip-components=1 -zxf /tmp/kibana.tar.gz
+# Ensure that group permissions are the same as user permissions.
+# This will help when relying on GID-0 to run Kibana, rather than UID-1000.
+# OpenShift does this, for example.
+# REF: https://docs.openshift.org/latest/creating_images/guidelines.html
+RUN chmod -R g=u /usr/share/kibana
 
-# See https://github.com/docker-library/official-images/pull/4917 for more details.
+
+################################################################################
+# Build stage 1 (the actual Kibana image):
+#
+# Copy kibana from stage 0
+# Add entrypoint
+################################################################################
+FROM ubuntu:20.04
+EXPOSE 5601
+
+RUN for iter in {1..10}; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update  && \
+      apt-get upgrade -y  && \
+      apt-get install -y --no-install-recommends \
+       fontconfig fonts-liberation libnss3 libfontconfig1 ca-certificates curl && \
+      apt-get clean && \
+      rm -rf /var/lib/apt/lists/* && exit_code=0 && break || exit_code=$? && echo "apt-get error: retry $iter in 10s" && \
+      sleep 10; \
+    done; \
+    (exit $exit_code)
+
+# Add an init process, check the checksum to make sure it's a match
+RUN set -e ; \
+    TINI_BIN="" ; \
+    case "$(arch)" in \
+        aarch64) \
+            TINI_BIN='tini-arm64' ; \
+            ;; \
+        x86_64) \
+            TINI_BIN='tini-amd64' ; \
+            ;; \
+        *) echo >&2 "Unsupported architecture $(arch)" ; exit 1 ;; \
+    esac ; \
+  TINI_VERSION='v0.19.0' ; \
+  curl --retry 8 -S -L -O "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/${TINI_BIN}" ; \
+  curl --retry 8 -S -L -O "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/${TINI_BIN}.sha256sum" ; \
+  sha256sum -c "${TINI_BIN}.sha256sum" ; \
+  rm "${TINI_BIN}.sha256sum" ; \
+  mv "${TINI_BIN}" /bin/tini ; \
+  chmod +x /bin/tini
+
+RUN mkdir /usr/share/fonts/local
+RUN curl --retry 8 -S -L -o /usr/share/fonts/local/NotoSansCJK-Regular.ttc https://github.com/googlefonts/noto-cjk/raw/NotoSansV2.001/NotoSansCJK-Regular.ttc
+RUN echo "5dcd1c336cc9344cb77c03a0cd8982ca8a7dc97d620fd6c9c434e02dcb1ceeb3  /usr/share/fonts/local/NotoSansCJK-Regular.ttc" | sha256sum -c -
+RUN fc-cache -v
+
+# Bring in Kibana from the initial stage.
+COPY --from=builder --chown=1000:0 /usr/share/kibana /usr/share/kibana
+WORKDIR /usr/share/kibana
+RUN ln -s /usr/share/kibana /opt/kibana
+
+ENV ELASTIC_CONTAINER true
+ENV PATH=/usr/share/kibana/bin:$PATH
+
+# Set some Kibana configuration defaults.
+COPY --chown=1000:0 config/kibana.yml /usr/share/kibana/config/kibana.yml
+
+# Add the launcher/wrapper script. It knows how to interpret environment
+# variables and translate them to Kibana CLI options.
+COPY bin/kibana-docker /usr/local/bin/
+
+# Ensure gid 0 write permissions for OpenShift.
+RUN chmod g+ws /usr/share/kibana && \
+    find /usr/share/kibana -gid 0 -and -not -perm /g+w -exec chmod g+w {} \;
+
+# Remove the suid bit everywhere to mitigate "Stack Clash"
+RUN find / -xdev -perm -4000 -exec chmod u-s {} +
+
+# Provide a non-root user to run the process.
+RUN groupadd --gid 1000 kibana && \
+    useradd --uid 1000 --gid 1000 -G 0 \
+      --home-dir /usr/share/kibana --no-create-home \
+      kibana
+
+LABEL org.label-schema.build-date="2023-11-10T21:05:44.206Z" \
+  org.label-schema.license="Elastic License" \
+  org.label-schema.name="Kibana" \
+  org.label-schema.schema-version="1.0" \
+  org.label-schema.url="https://www.elastic.co/products/kibana" \
+  org.label-schema.usage="https://www.elastic.co/guide/en/kibana/reference/index.html" \
+  org.label-schema.vcs-ref="09feaf416f986b239b8e8ad95ecdda0f9d56ebec" \
+  org.label-schema.vcs-url="https://github.com/elastic/kibana" \
+  org.label-schema.vendor="Elastic" \
+  org.label-schema.version="8.11.1" \
+  org.opencontainers.image.created="2023-11-10T21:05:44.206Z" \
+  org.opencontainers.image.documentation="https://www.elastic.co/guide/en/kibana/reference/index.html" \
+  org.opencontainers.image.licenses="Elastic License" \
+  org.opencontainers.image.revision="09feaf416f986b239b8e8ad95ecdda0f9d56ebec" \
+  org.opencontainers.image.source="https://github.com/elastic/kibana" \
+  org.opencontainers.image.title="Kibana" \
+  org.opencontainers.image.url="https://www.elastic.co/products/kibana" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.opencontainers.image.version="8.11.1"
+
+
+ENTRYPOINT ["/bin/tini", "--"]
+
+
+CMD ["/usr/local/bin/kibana-docker"]
+
+
+USER kibana
diff --git a/kibana_8.11.1/bin/kibana-docker b/kibana_8.11.1/bin/kibana-docker
new file mode 100755
index 0000000..97ca840
--- /dev/null
+++ b/kibana_8.11.1/bin/kibana-docker
@@ -0,0 +1,454 @@
+#!/bin/bash
+#
+# ** THIS IS AN AUTO-GENERATED FILE **
+#
+
+# Run Kibana, using environment variables to set longopts defining Kibana's
+# configuration.
+#
+# eg. Setting the environment variable:
+#
+#       ELASTICSEARCH_LOGQUERIES=true
+#
+# will cause Kibana to be invoked with:
+#
+#       --elasticsearch.logQueries=true
+
+kibana_vars=(
+    apm_oss.apmAgentConfigurationIndex
+    apm_oss.errorIndices
+    apm_oss.indexPattern
+    apm_oss.metricsIndices
+    apm_oss.onboardingIndices
+    apm_oss.sourcemapIndices
+    apm_oss.spanIndices
+    apm_oss.transactionIndices
+    console.proxyConfig
+    console.proxyFilter
+    csp.strict
+    csp.warnLegacyBrowsers
+    csp.disableUnsafeEval
+    csp.script_src
+    csp.worker_src
+    csp.style_src
+    csp.connect_src
+    csp.default_src
+    csp.font_src
+    csp.frame_src
+    csp.img_src
+    csp.frame_ancestors
+    csp.report_uri
+    csp.report_to
+    data.autocomplete.valueSuggestions.terminateAfter
+    data.autocomplete.valueSuggestions.timeout
+    data.search.asyncSearch.waitForCompletion
+    data.search.asyncSearch.keepAlive
+    data.search.asyncSearch.batchedReduceSize
+    data.search.asyncSearch.pollInterval
+    data.search.sessions.defaultExpiration
+    data.search.sessions.enabled
+    data.search.sessions.maxUpdateRetries
+    data.search.sessions.notTouchedInProgressTimeout
+    data.search.sessions.notTouchedTimeout
+    data.search.sessions.pageSize
+    data.search.sessions.trackingInterval
+    unifiedSearch.autocomplete.valueSuggestions.terminateAfter
+    unifiedSearch.autocomplete.valueSuggestions.timeout
+    unifiedSearch.autocomplete.querySuggestions.enabled
+    unifiedSearch.autocomplete.valueSuggestions.enabled
+    unifiedSearch.autocomplete.valueSuggestions.tiers
+    elasticsearch.customHeaders
+    elasticsearch.hosts
+    elasticsearch.logQueries
+    elasticsearch.password
+    elasticsearch.pingTimeout
+    elasticsearch.requestHeadersWhitelist
+    elasticsearch.requestTimeout
+    elasticsearch.serviceAccountToken
+    elasticsearch.shardTimeout
+    elasticsearch.sniffInterval
+    elasticsearch.sniffOnConnectionFault
+    elasticsearch.sniffOnStart
+    elasticsearch.ssl.alwaysPresentCertificate
+    elasticsearch.ssl.certificate
+    elasticsearch.ssl.certificateAuthorities
+    elasticsearch.ssl.key
+    elasticsearch.ssl.keyPassphrase
+    elasticsearch.ssl.keystore.password
+    elasticsearch.ssl.keystore.path
+    elasticsearch.ssl.truststore.password
+    elasticsearch.ssl.truststore.path
+    elasticsearch.ssl.verificationMode
+    elasticsearch.username
+    enterpriseSearch.accessCheckTimeout
+    enterpriseSearch.accessCheckTimeoutWarning
+    enterpriseSearch.host
+    externalUrl.policy
+    i18n.locale
+    interactiveSetup.enabled
+    interactiveSetup.connectionCheck.interval
+    kibana.autocompleteTerminateAfter
+    kibana.autocompleteTimeout
+    kibana.index
+    logging.appenders
+    logging.appenders.console
+    logging.appenders.file
+    logging.loggers
+    logging.loggers.appenders
+    logging.loggers.level
+    logging.loggers.name
+    logging.root
+    logging.root.appenders
+    logging.root.level
+    map.emsUrl
+    map.includeElasticMapsService
+    map.tilemap.options.attribution
+    map.tilemap.options.maxZoom
+    map.tilemap.options.minZoom
+    map.tilemap.options.subdomains
+    map.tilemap.url
+    migrations.batchSize
+    migrations.maxBatchSizeBytes
+    migrations.pollInterval
+    migrations.retryAttempts
+    migrations.scrollDuration
+    migrations.skip
+    monitoring.cluster_alerts.email_notifications.email_address
+    monitoring.kibana.collection.enabled
+    monitoring.kibana.collection.interval
+    monitoring.ui.container.elasticsearch.enabled
+    monitoring.ui.container.logstash.enabled
+    monitoring.ui.elasticsearch.hosts
+    monitoring.ui.elasticsearch.logFetchCount
+    monitoring.ui.elasticsearch.password
+    monitoring.ui.elasticsearch.pingTimeout
+    monitoring.ui.elasticsearch.ssl.certificateAuthorities
+    monitoring.ui.elasticsearch.ssl.verificationMode
+    monitoring.ui.elasticsearch.username
+    monitoring.ui.enabled
+    monitoring.ui.logs.index
+    monitoring.ui.max_bucket_size
+    monitoring.ui.min_interval_seconds
+    newsfeed.enabled
+    node.roles
+    ops.cGroupOverrides.cpuAcctPath
+    ops.cGroupOverrides.cpuPath
+    ops.interval
+    path.data
+    pid.file
+    regionmap
+    savedObjects.maxImportExportSize
+    savedObjects.maxImportPayloadBytes
+    savedObjects.allowHttpApiAccess
+    security.showInsecureClusterWarning
+    server.basePath
+    server.compression.enabled
+    server.compression.referrerWhitelist
+    server.cors
+    server.cors.allowCredentials
+    server.cors.allowOrigin
+    server.cors.enabled
+    server.cors.origin
+    server.customResponseHeaders
+    server.defaultRoute
+    server.host
+    server.keepAliveTimeout
+    server.maxPayload
+    server.maxPayloadBytes
+    server.name
+    server.port
+    server.publicBaseUrl
+    server.requestId.allowFromAnyIp
+    server.requestId.ipAllowlist
+    server.rewriteBasePath
+    server.restrictInternalApis
+    server.securityResponseHeaders.disableEmbedding
+    server.securityResponseHeaders.permissionsPolicy
+    server.securityResponseHeaders.referrerPolicy
+    server.securityResponseHeaders.strictTransportSecurity
+    server.securityResponseHeaders.xContentTypeOptions
+    server.securityResponseHeaders.crossOriginOpenerPolicy
+    server.shutdownTimeout
+    server.socketTimeout
+    server.ssl.cert
+    server.ssl.certificate
+    server.ssl.certificateAuthorities
+    server.ssl.cipherSuites
+    server.ssl.clientAuthentication
+    server.ssl.enabled
+    server.ssl.key
+    server.ssl.keyPassphrase
+    server.ssl.keystore.password
+    server.ssl.keystore.path
+    server.ssl.redirectHttpFromPort
+    server.ssl.supportedProtocols
+    server.ssl.truststore.password
+    server.ssl.truststore.path
+    server.uuid
+    server.xsrf.allowlist
+    server.xsrf.disableProtection
+    status.allowAnonymous
+    status.v6ApiFormat
+    telemetry.allowChangingOptInStatus
+    telemetry.enabled
+    telemetry.hidePrivacyStatement
+    telemetry.optIn
+    telemetry.sendUsageTo
+    telemetry.sendUsageFrom
+    tilemap.options.attribution
+    tilemap.options.maxZoom
+    tilemap.options.minZoom
+    tilemap.options.subdomains
+    tilemap.url
+    vega.enableExternalUrls
+    vis_type_vega.enableExternalUrls
+    xpack.actions.allowedHosts
+    xpack.actions.customHostSettings
+    xpack.actions.email.domain_allowlist
+    xpack.actions.enableFooterInEmail
+    xpack.actions.enabledActionTypes
+    xpack.actions.maxResponseContentLength
+    xpack.actions.preconfigured
+    xpack.actions.preconfiguredAlertHistoryEsIndex
+    xpack.actions.proxyBypassHosts
+    xpack.actions.proxyHeaders
+    xpack.actions.proxyOnlyHosts
+    xpack.actions.proxyRejectUnauthorizedCertificates
+    xpack.actions.proxyUrl
+    xpack.actions.rejectUnauthorized
+    xpack.actions.responseTimeout
+    xpack.actions.ssl.proxyVerificationMode
+    xpack.actions.ssl.verificationMode
+    xpack.alerting.healthCheck.interval
+    xpack.alerting.invalidateApiKeysTask.interval
+    xpack.alerting.invalidateApiKeysTask.removalDelay
+    xpack.alerting.defaultRuleTaskTimeout
+    xpack.alerting.rules.run.timeout
+    xpack.alerting.rules.run.ruleTypeOverrides
+    xpack.alerting.cancelAlertsOnRuleTimeout
+    xpack.alerting.rules.minimumScheduleInterval.value
+    xpack.alerting.rules.minimumScheduleInterval.enforce
+    xpack.alerting.rules.run.actions.max
+    xpack.alerting.rules.run.alerts.max
+    xpack.alerting.rules.run.actions.connectorTypeOverrides
+    xpack.alerts.healthCheck.interval
+    xpack.alerts.invalidateApiKeysTask.interval
+    xpack.alerts.invalidateApiKeysTask.removalDelay
+    xpack.apm.indices.error
+    xpack.apm.indices.metric
+    xpack.apm.indices.onboarding
+    xpack.apm.indices.sourcemap
+    xpack.apm.indices.span
+    xpack.apm.indices.transaction
+    xpack.apm.maxServiceEnvironments
+    xpack.apm.searchAggregatedTransactions
+    xpack.apm.serviceMapEnabled
+    xpack.apm.serviceMapFingerprintBucketSize
+    xpack.apm.serviceMapFingerprintGlobalBucketSize
+    xpack.apm.ui.enabled
+    xpack.apm.ui.maxTraceItems
+    xpack.apm.ui.transactionGroupBucketSize
+    xpack.banners.backgroundColor
+    xpack.banners.disableSpaceBanners
+    xpack.banners.placement
+    xpack.banners.textColor
+    xpack.banners.textContent
+    xpack.cases.files.allowedMimeTypes
+    xpack.cases.files.maxSize
+    xpack.code.disk.thresholdEnabled
+    xpack.code.disk.watermarkLow
+    xpack.code.indexRepoFrequencyMs
+    xpack.code.lsp.verbose
+    xpack.code.maxWorkspace
+    xpack.code.security.enableGitCertCheck
+    xpack.code.security.gitHostWhitelist
+    xpack.code.security.gitProtocolWhitelist
+    xpack.code.ui.enabled
+    xpack.code.updateRepoFrequencyMs
+    xpack.code.verbose
+    xpack.data_enhanced.search.sessions.defaultExpiration
+    xpack.data_enhanced.search.sessions.enabled
+    xpack.data_enhanced.search.sessions.maxUpdateRetries
+    xpack.data_enhanced.search.sessions.notTouchedInProgressTimeout
+    xpack.data_enhanced.search.sessions.notTouchedTimeout
+    xpack.data_enhanced.search.sessions.pageSize
+    xpack.data_enhanced.search.sessions.trackingInterval
+    xpack.discoverEnhanced.actions.exploreDataInChart.enabled
+    xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled
+    xpack.encryptedSavedObjects.encryptionKey
+    xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys
+    xpack.event_log.indexEntries
+    xpack.event_log.logEntries
+    xpack.fleet.agentPolicies
+    xpack.fleet.agents.elasticsearch.host
+    xpack.fleet.agents.elasticsearch.hosts
+    xpack.fleet.agents.enabled
+    xpack.fleet.agents.fleet_server.hosts
+    xpack.fleet.agents.kibana.host
+    xpack.fleet.agents.tlsCheckDisabled
+    xpack.fleet.packages
+    xpack.fleet.packageVerification.gpgKeyPath
+    xpack.fleet.registryProxyUrl
+    xpack.fleet.registryUrl
+    xpack.graph.canEditDrillDownUrls
+    xpack.graph.savePolicy
+    xpack.infra.query.partitionFactor
+    xpack.infra.query.partitionSize
+    xpack.infra.sources.default.fields.container
+    xpack.infra.sources.default.fields.host
+    xpack.infra.sources.default.fields.message
+    xpack.infra.sources.default.fields.pod
+    xpack.infra.sources.default.fields.tiebreaker
+    xpack.infra.sources.default.fields.timestamp
+    xpack.infra.sources.default.logAlias
+    xpack.infra.sources.default.metricAlias
+    xpack.ingestManager.fleet.tlsCheckDisabled
+    xpack.ingestManager.registryUrl
+    xpack.observability.annotations.index
+    xpack.observability.unsafe.alertDetails.metrics.enabled
+    xpack.observability.unsafe.alertDetails.logs.enabled
+    xpack.observability.unsafe.alertDetails.uptime.enabled
+    xpack.observability.unsafe.alertDetails.observability.enabled
+    xpack.observability.unsafe.thresholdRule.enabled
+    xpack.observability.compositeSlo.enabled
+    xpack.reporting.capture.browser.autoDownload
+    xpack.reporting.capture.browser.chromium.disableSandbox
+    xpack.reporting.capture.browser.chromium.inspect
+    xpack.reporting.capture.browser.chromium.maxScreenshotDimension
+    xpack.reporting.capture.browser.chromium.proxy.bypass
+    xpack.reporting.capture.browser.chromium.proxy.enabled
+    xpack.reporting.capture.browser.chromium.proxy.server
+    xpack.reporting.capture.browser.type
+    xpack.reporting.capture.concurrency
+    xpack.reporting.capture.loadDelay
+    xpack.reporting.capture.maxAttempts
+    xpack.reporting.capture.networkPolicy
+    xpack.reporting.capture.settleTime
+    xpack.reporting.capture.timeout
+    xpack.reporting.capture.timeouts.openUrl
+    xpack.reporting.capture.timeouts.renderComplete
+    xpack.reporting.capture.timeouts.waitForElements
+    xpack.reporting.capture.viewport.height
+    xpack.reporting.capture.viewport.width
+    xpack.reporting.capture.zoom
+    xpack.reporting.csv.checkForFormulas
+    xpack.reporting.csv.enablePanelActionDownload
+    xpack.reporting.csv.escapeFormulaValues
+    xpack.reporting.csv.maxSizeBytes
+    xpack.reporting.csv.scroll.duration
+    xpack.reporting.csv.scroll.size
+    xpack.reporting.csv.useByteOrderMarkEncoding
+    xpack.reporting.enabled
+    xpack.reporting.encryptionKey
+    xpack.reporting.kibanaApp
+    xpack.reporting.kibanaServer.hostname
+    xpack.reporting.kibanaServer.port
+    xpack.reporting.kibanaServer.protocol
+    xpack.reporting.poll.jobCompletionNotifier.interval
+    xpack.reporting.poll.jobCompletionNotifier.intervalErrorMultiplier
+    xpack.reporting.poll.jobsRefresh.interval
+    xpack.reporting.poll.jobsRefresh.intervalErrorMultiplier
+    xpack.reporting.queue.indexInterval
+    xpack.reporting.queue.pollEnabled
+    xpack.reporting.queue.pollInterval
+    xpack.reporting.queue.pollIntervalErrorMultiplier
+    xpack.reporting.queue.timeout
+    xpack.reporting.roles.allow
+    xpack.reporting.roles.enabled
+    xpack.ruleRegistry.write.enabled
+    xpack.security.accessAgreement.message
+    xpack.security.audit.appender.fileName
+    xpack.security.audit.appender.layout.highlight
+    xpack.security.audit.appender.layout.pattern
+    xpack.security.audit.appender.layout.type
+    xpack.security.audit.appender.legacyLoggingConfig
+    xpack.security.audit.appender.policy.interval
+    xpack.security.audit.appender.policy.modulate
+    xpack.security.audit.appender.policy.size
+    xpack.security.audit.appender.policy.type
+    xpack.security.audit.appender.strategy.max
+    xpack.security.audit.appender.strategy.pattern
+    xpack.security.audit.appender.strategy.type
+    xpack.security.audit.appender.type
+    xpack.security.audit.enabled
+    xpack.security.audit.ignore_filters
+    xpack.security.authc.http.autoSchemesEnabled
+    xpack.security.authc.http.enabled
+    xpack.security.authc.http.schemes
+    xpack.security.authc.oidc.realm
+    xpack.security.authc.providers
+    xpack.security.authc.saml.maxRedirectURLSize
+    xpack.security.authc.saml.realm
+    xpack.security.authc.selector.enabled
+    xpack.security.cookieName
+    xpack.security.encryptionKey
+    xpack.security.loginAssistanceMessage
+    xpack.security.loginHelp
+    xpack.security.sameSiteCookies
+    xpack.security.secureCookies
+    xpack.security.session.cleanupInterval
+    xpack.security.session.concurrentSessions.maxSessions
+    xpack.security.session.idleTimeout
+    xpack.security.session.lifespan
+    xpack.security.sessionTimeout
+    xpack.security.showInsecureClusterWarning
+    xpack.securitySolution.alertMergeStrategy
+    xpack.securitySolution.alertIgnoreFields
+    xpack.securitySolution.maxExceptionsImportSize
+    xpack.securitySolution.maxRuleImportExportSize
+    xpack.securitySolution.maxRuleImportPayloadBytes
+    xpack.securitySolution.maxTimelineImportExportSize
+    xpack.securitySolution.maxTimelineImportPayloadBytes
+    xpack.securitySolution.packagerTaskInterval
+    xpack.securitySolution.prebuiltRulesPackageVersion
+    xpack.spaces.maxSpaces
+    xpack.task_manager.max_attempts
+    xpack.task_manager.max_workers
+    xpack.task_manager.monitored_aggregated_stats_refresh_rate
+    xpack.task_manager.monitored_stats_required_freshness
+    xpack.task_manager.monitored_stats_running_average_window
+    xpack.task_manager.monitored_stats_health_verbose_log.enabled
+    xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds
+    xpack.task_manager.monitored_task_execution_thresholds
+    xpack.task_manager.poll_interval
+    xpack.task_manager.request_capacity
+    xpack.task_manager.version_conflict_threshold
+    xpack.task_manager.event_loop_delay.monitor
+    xpack.task_manager.event_loop_delay.warn_threshold
+    xpack.task_manager.worker_utilization_running_average_window
+    xpack.uptime.index
+    serverless
+)
+
+longopts=''
+for kibana_var in ${kibana_vars[*]}; do
+    # 'elasticsearch.hosts' -> 'ELASTICSEARCH_HOSTS'
+    env_var=$(echo ${kibana_var^^} | tr . _)
+
+    # Indirectly lookup env var values via the name of the var.
+    # REF: http://tldp.org/LDP/abs/html/bashver2.html#EX78
+    value=${!env_var}
+    if [[ -n $value ]]; then
+      longopt="--${kibana_var}=${value}"
+      longopts+=" ${longopt}"
+    fi
+done
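+# Editorial note (not part of the generated script): ${!env_var} is bash
+# indirect expansion -- env_var holds the *name* of a variable, so for
+# kibana_var=server.basePath the loop reads $SERVER_BASEPATH and, if it is
+# set, appends --server.basePath=<value> to longopts.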
+
+# Files created at run-time should be group-writable, for OpenShift's sake.
+umask 0002
+
+# The virtual file /proc/self/cgroup should list the current cgroup
+# membership. For each hierarchy, you can follow the cgroup path from
+# this file to the cgroup filesystem (usually /sys/fs/cgroup/) and
+# introspect the statistics for the cgroup for the given
+# hierarchy. Alas, Docker breaks this by mounting the container
+# statistics at the root while leaving the cgroup paths as the actual
+# paths. Therefore, Kibana provides a mechanism to override
+# reading the cgroup path from /proc/self/cgroup and instead use the
+# cgroup path defined in the configuration properties
+# ops.cGroupOverrides.cpuPath and ops.cGroupOverrides.cpuAcctPath.
+# Accordingly, we set these values here so that cgroup statistics are
+# available for the container this process will run in.
+
+exec /usr/share/kibana/bin/kibana --ops.cGroupOverrides.cpuPath=/ --ops.cGroupOverrides.cpuAcctPath=/ ${longopts} "$@"
diff --git a/kibana_8.11.1/config/kibana.yml b/kibana_8.11.1/config/kibana.yml
new file mode 100644
index 0000000..230ba1c
--- /dev/null
+++ b/kibana_8.11.1/config/kibana.yml
@@ -0,0 +1,9 @@
+#
+# ** THIS IS AN AUTO-GENERATED FILE **
+#
+
+# Default Kibana configuration for docker target
+server.host: "0.0.0.0"
+server.shutdownTimeout: "5s"
+elasticsearch.hosts: [ "http://elasticsearch:9200" ]
+monitoring.ui.container.elasticsearch.enabled: true
\ No newline at end of file
diff --git a/logstash_7.17.15/Dockerfile b/logstash_7.17.15/Dockerfile
index 1c68cab..ca1e63a 100644
--- a/logstash_7.17.15/Dockerfile
+++ b/logstash_7.17.15/Dockerfile
@@ -1,17 +1,76 @@
-# Logstash 7.17.15
+# This Dockerfile was generated from templates/Dockerfile.j2
+FROM ubuntu:20.04
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/logstash/logstash:7.17.15@sha256:a7a5cb1ead281fc31404c70e413b1bc93a04ef33ca92a1de62a1528d7a10a7b4
-# Supported Bashbrew Architectures: amd64 arm64v8
+RUN for iter in {1..10}; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update -y && \
+      apt-get upgrade -y && \
+      apt-get install -y procps findutils tar gzip curl && \
+      apt-get install -y locales && \
+      apt-get clean all && \
+      locale-gen 'en_US.UTF-8' && \
+      apt-get clean metadata && \
+      exit_code=0 && break || exit_code=$? && \
+      echo "packaging error: retry $iter in 10s" && \
+      apt-get clean all && \
+      apt-get clean metadata && \
+      sleep 10; \
+    done; \
+    (exit $exit_code)
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v7.17.15/logstash
+# Provide a non-root user to run the process.
+RUN groupadd --gid 1000 logstash && \
+    adduser --uid 1000 --gid 1000 \
+      --home /usr/share/logstash --no-create-home \
+      logstash
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v7.17.15:logstash'
+# Add Logstash itself.
+RUN curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-7.17.15-linux-$(arch).tar.gz | \
+    tar zxf - -C /usr/share && \
+    mv /usr/share/logstash-7.17.15 /usr/share/logstash && \
+    chown --recursive logstash:logstash /usr/share/logstash/ && \
+    chown -R logstash:root /usr/share/logstash && \
+    chmod -R g=u /usr/share/logstash && \
+    mkdir /licenses/ && \
+    mv /usr/share/logstash/NOTICE.TXT /licenses/NOTICE.TXT && \
+    mv /usr/share/logstash/LICENSE.txt /licenses/LICENSE.txt && \
+    find /usr/share/logstash -type d -exec chmod g+s {} \; && \
+    ln -s /usr/share/logstash /opt/logstash
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
+WORKDIR /usr/share/logstash
+ENV ELASTIC_CONTAINER true
+ENV PATH=/usr/share/logstash/bin:$PATH
 
-# For Logstash documentation visit https://www.elastic.co/guide/en/logstash/current/docker.html
+# Provide a minimal configuration, so that simple invocations will provide
+# a good experience.
+ADD config/pipelines.yml config/pipelines.yml
+ADD config/logstash-full.yml config/logstash.yml
+ADD config/log4j2.properties config/
+ADD pipeline/default.conf pipeline/logstash.conf
+RUN chown --recursive logstash:root config/ pipeline/
+# Ensure Logstash gets the correct locale by default.
+ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
+ADD env2yaml/env2yaml /usr/local/bin/
+# Place the startup wrapper script.
+ADD bin/docker-entrypoint /usr/local/bin/
+RUN chmod 0755 /usr/local/bin/docker-entrypoint
 
-# See https://github.com/docker-library/official-images/pull/5039 for more details.
+USER 1000
+
+EXPOSE 9600 5044
+
+LABEL org.label-schema.schema-version="1.0" \
+  org.label-schema.vendor="Elastic" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.label-schema.name="logstash" \
+  org.opencontainers.image.title="logstash" \
+  org.label-schema.version="7.17.15" \
+  org.opencontainers.image.version="7.17.15" \
+  org.label-schema.url="https://www.elastic.co/products/logstash" \
+  org.label-schema.vcs-url="https://github.com/elastic/logstash" \
+  org.label-schema.license="Elastic License" \
+  org.opencontainers.image.licenses="Elastic License" \
+  org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
+  org.label-schema.build-date=2023-10-10T17:45:59+00:00 \
+  org.opencontainers.image.created=2023-10-10T17:45:59+00:00
+ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]
diff --git a/logstash_7.17.15/bin/docker-entrypoint b/logstash_7.17.15/bin/docker-entrypoint
new file mode 100755
index 0000000..19165f1
--- /dev/null
+++ b/logstash_7.17.15/bin/docker-entrypoint
@@ -0,0 +1,15 @@
+#!/bin/bash -e
+
+# Map environment variables to entries in logstash.yml.
+# Note that this will mutate logstash.yml in place if any such settings are found.
+# This may be undesirable, especially if logstash.yml is bind-mounted from the
+# host system.
+env2yaml /usr/share/logstash/config/logstash.yml
+
+export LS_JAVA_OPTS="-Dls.cgroup.cpuacct.path.override=/ -Dls.cgroup.cpu.path.override=/ $LS_JAVA_OPTS"
+
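+# Editorial note (not part of the upstream script): if the first argument is
+# absent or starts with '-', the arguments are treated as flags for Logstash
+# itself; otherwise they are run as an arbitrary command. For example
+# (hypothetical invocations), `docker run <image> -e 'input { stdin {} }'`
+# execs logstash, while `docker run <image> bash` execs bash.
+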
+if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
+  exec logstash "$@"
+else
+  exec "$@"
+fi
diff --git a/logstash_7.17.15/config/log4j2.properties b/logstash_7.17.15/config/log4j2.properties
new file mode 100644
index 0000000..663a015
--- /dev/null
+++ b/logstash_7.17.15/config/log4j2.properties
@@ -0,0 +1,16 @@
+status = error
+name = LogstashPropertiesConfig
+
+appender.console.type = Console
+appender.console.name = plain_console
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
+
+appender.json_console.type = Console
+appender.json_console.name = json_console
+appender.json_console.layout.type = JSONLayout
+appender.json_console.layout.compact = true
+appender.json_console.layout.eventEol = true
+
+rootLogger.level = ${sys:ls.log.level}
+rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
diff --git a/logstash_7.17.15/config/logstash-full.yml b/logstash_7.17.15/config/logstash-full.yml
new file mode 100644
index 0000000..58e1a35
--- /dev/null
+++ b/logstash_7.17.15/config/logstash-full.yml
@@ -0,0 +1,2 @@
+http.host: "0.0.0.0"
+xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
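+# Editorial note (not part of the upstream file): the entrypoint's env2yaml
+# step can rewrite these keys at container start, e.g. (hypothetical value)
+# XPACK_MONITORING_ELASTICSEARCH_HOSTS=http://es:9200 would replace the
+# hosts entry above.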
diff --git a/logstash_7.17.15/config/pipelines.yml b/logstash_7.17.15/config/pipelines.yml
new file mode 100644
index 0000000..aed22ce
--- /dev/null
+++ b/logstash_7.17.15/config/pipelines.yml
@@ -0,0 +1,6 @@
+# This file is where you define your pipelines. You can define multiple.
+# For more information on multiple pipelines, see the documentation:
+#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
+
+- pipeline.id: main
+  path.config: "/usr/share/logstash/pipeline"
diff --git a/logstash_7.17.15/env2yaml/env2yaml b/logstash_7.17.15/env2yaml/env2yaml
new file mode 100755
index 0000000..6badf5d
Binary files /dev/null and b/logstash_7.17.15/env2yaml/env2yaml differ
diff --git a/logstash_7.17.15/pipeline/default.conf b/logstash_7.17.15/pipeline/default.conf
new file mode 100644
index 0000000..11ce14c
--- /dev/null
+++ b/logstash_7.17.15/pipeline/default.conf
@@ -0,0 +1,12 @@
+input {
+  beats {
+    port => 5044
+  }
+}
+
+output {
+  stdout {
+    codec => rubydebug
+  }
+}
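+
+# Editorial note (not part of the upstream file): any Beat shipping to this
+# container's port 5044 (exposed by the Dockerfile above) has its events
+# pretty-printed on the container's stdout by the rubydebug codec.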
+
diff --git a/logstash_8.11.1/Dockerfile b/logstash_8.11.1/Dockerfile
index 2854b46..ab5215d 100644
--- a/logstash_8.11.1/Dockerfile
+++ b/logstash_8.11.1/Dockerfile
@@ -1,17 +1,77 @@
-# Logstash 8.11.1
+# This Dockerfile was generated from templates/Dockerfile.j2
+FROM ubuntu:20.04
 
-# This image re-bundles the Docker image from the upstream provider, Elastic.
-FROM docker.elastic.co/logstash/logstash:8.11.1@sha256:c036aa60b8d73b75e8dfbf8eea92dd25894ecc1d44038e4fb89503ea9b8795dc
-# Supported Bashbrew Architectures: amd64 arm64v8
+RUN for iter in {1..10}; do \
+      export DEBIAN_FRONTEND=noninteractive && \
+      apt-get update -y && \
+      apt-get upgrade -y && \
+      apt-get install -y procps findutils tar gzip curl && \
+      apt-get install -y locales && \
+      apt-get clean all && \
+      locale-gen 'en_US.UTF-8' && \
+      apt-get clean metadata && \
+      exit_code=0 && break || exit_code=$? && \
+      echo "packaging error: retry $iter in 10s" && \
+      apt-get clean all && \
+      apt-get clean metadata && \
+      sleep 10; \
+    done; \
+    (exit $exit_code)
 
-# The upstream image was built by:
-#   https://github.com/elastic/dockerfiles/tree/v8.11.1/logstash
+# Provide a non-root user to run the process.
+RUN groupadd --gid 1000 logstash && \
+    adduser --uid 1000 --gid 1000 \
+      --home /usr/share/logstash --no-create-home \
+      logstash
 
-# The build can be reproduced locally via:
-#   docker build 'https://github.com/elastic/dockerfiles.git#v8.11.1:logstash'
+# Add Logstash itself.
+RUN curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-8.11.1-linux-$(arch).tar.gz | \
+    tar zxf - -C /usr/share && \
+    mv /usr/share/logstash-8.11.1 /usr/share/logstash && \
+    chown --recursive logstash:logstash /usr/share/logstash/ && \
+    chown -R logstash:root /usr/share/logstash && \
+    chmod -R g=u /usr/share/logstash && \
+    mkdir /licenses/ && \
+    mv /usr/share/logstash/NOTICE.TXT /licenses/NOTICE.TXT && \
+    mv /usr/share/logstash/LICENSE.txt /licenses/LICENSE.txt && \
+    find /usr/share/logstash -type d -exec chmod g+s {} \; && \
+    ln -s /usr/share/logstash /opt/logstash
 
-# For a full list of supported images and tags visit https://www.docker.elastic.co
+WORKDIR /usr/share/logstash
+ENV ELASTIC_CONTAINER true
+ENV PATH=/usr/share/logstash/bin:$PATH
 
-# For Logstash documentation visit https://www.elastic.co/guide/en/logstash/current/docker.html
+# Provide a minimal configuration, so that simple invocations will provide
+# a good experience.
+COPY config/pipelines.yml config/pipelines.yml
+COPY config/logstash-full.yml config/logstash.yml
+COPY config/log4j2.properties config/
+COPY config/log4j2.file.properties config/
+COPY pipeline/default.conf pipeline/logstash.conf
+RUN chown --recursive logstash:root config/ pipeline/
+# Ensure Logstash gets the correct locale by default.
+ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
+COPY env2yaml/env2yaml /usr/local/bin/
+# Place the startup wrapper script.
+COPY bin/docker-entrypoint /usr/local/bin/
+RUN chmod 0755 /usr/local/bin/docker-entrypoint
 
-# See https://github.com/docker-library/official-images/pull/5039 for more details.
+USER 1000
+
+EXPOSE 9600 5044
+
+LABEL  org.label-schema.schema-version="1.0" \
+  org.label-schema.vendor="Elastic" \
+  org.opencontainers.image.vendor="Elastic" \
+  org.label-schema.name="logstash" \
+  org.opencontainers.image.title="logstash" \
+  org.label-schema.version="8.11.1" \
+  org.opencontainers.image.version="8.11.1" \
+  org.label-schema.url="https://www.elastic.co/products/logstash" \
+  org.label-schema.vcs-url="https://github.com/elastic/logstash" \
+  org.label-schema.license="Elastic License" \
+  org.opencontainers.image.licenses="Elastic License" \
+  org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
+  org.label-schema.build-date=2023-11-11T08:49:05+00:00 \
+org.opencontainers.image.created=2023-11-11T08:49:05+00:00
+ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]
diff --git a/logstash_8.11.1/bin/docker-entrypoint b/logstash_8.11.1/bin/docker-entrypoint
new file mode 100755
index 0000000..e2fd33c
--- /dev/null
+++ b/logstash_8.11.1/bin/docker-entrypoint
@@ -0,0 +1,31 @@
+#!/bin/bash -e
+
+# Map environment variables to entries in logstash.yml.
+# Note that this will mutate logstash.yml in place if any such settings are found.
+# This may be undesirable, especially if logstash.yml is bind-mounted from the
+# host system.
+env2yaml /usr/share/logstash/config/logstash.yml
+
+if [[ -n "$LOG_STYLE" ]]; then
+  case "$LOG_STYLE" in
+    console)
+      # This is the default. Nothing to do.
+      ;;
+    file)
+      # Overwrite the default config with the stack config. Do this as a
+      # copy, not a move, in case the container is restarted.
+      cp -f /usr/share/logstash/config/log4j2.file.properties /usr/share/logstash/config/log4j2.properties
+      ;;
+    *)
+      echo "ERROR: LOG_STYLE set to [$LOG_STYLE]. Expected [console] or [file]" >&2
+      exit 1 ;;
+  esac
+fi
+
+export LS_JAVA_OPTS="-Dls.cgroup.cpuacct.path.override=/ -Dls.cgroup.cpu.path.override=/ $LS_JAVA_OPTS"
+
+if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
+  exec logstash "$@"
+else
+  exec "$@"
+fi
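
(Editorial aside, not part of the diff: the entrypoint above runs env2yaml against /usr/share/logstash/config/logstash.yml before Logstash starts, so environment variables can stand in for logstash.yml settings. A minimal sketch of the idea, assuming upstream's convention of mapping upper-cased, underscore-separated variable names to dotted setting names; the specific variables below are illustrative, not verified against this binary:)

    # Hypothetical usage: override logstash.yml settings via the environment.
    docker run --rm -e PIPELINE_WORKERS=4 -e LOG_LEVEL=info logstash:8.11.1
    # env2yaml would rewrite logstash.yml to contain:
    #   pipeline.workers: 4
    #   log.level: info
    # ...before the entrypoint finally exec's `logstash`.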
diff --git a/logstash_8.11.1/config/log4j2.file.properties b/logstash_8.11.1/config/log4j2.file.properties
new file mode 100644
index 0000000..234b23d
--- /dev/null
+++ b/logstash_8.11.1/config/log4j2.file.properties
@@ -0,0 +1,147 @@
+status = error
+name = LogstashPropertiesConfig
+
+appender.console.type = Console
+appender.console.name = plain_console
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
+
+appender.json_console.type = Console
+appender.json_console.name = json_console
+appender.json_console.layout.type = JSONLayout
+appender.json_console.layout.compact = true
+appender.json_console.layout.eventEol = true
+
+appender.rolling.type = RollingFile
+appender.rolling.name = plain_rolling
+appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
+appender.rolling.filePattern = ${sys:ls.logs}/logstash-plain-%d{yyyy-MM-dd}-%i.log.gz
+appender.rolling.policies.type = Policies
+appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
+appender.rolling.policies.time.interval = 1
+appender.rolling.policies.time.modulate = true
+appender.rolling.layout.type = PatternLayout
+appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
+appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
+appender.rolling.policies.size.size = 100MB
+appender.rolling.strategy.type = DefaultRolloverStrategy
+appender.rolling.strategy.max = 30
+appender.rolling.avoid_pipelined_filter.type = PipelineRoutingFilter
+
+appender.json_rolling.type = RollingFile
+appender.json_rolling.name = json_rolling
+appender.json_rolling.fileName = ${sys:ls.logs}/logstash-json.log
+appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-json-%d{yyyy-MM-dd}-%i.log.gz
+appender.json_rolling.policies.type = Policies
+appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
+appender.json_rolling.policies.time.interval = 1
+appender.json_rolling.policies.time.modulate = true
+appender.json_rolling.layout.type = JSONLayout
+appender.json_rolling.layout.compact = true
+appender.json_rolling.layout.eventEol = true
+appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
+appender.json_rolling.policies.size.size = 100MB
+appender.json_rolling.strategy.type = DefaultRolloverStrategy
+appender.json_rolling.strategy.max = 30
+appender.json_rolling.avoid_pipelined_filter.type = PipelineRoutingFilter
+
+appender.routing.type = PipelineRouting
+appender.routing.name = pipeline_routing_appender
+appender.routing.pipeline.type = RollingFile
+appender.routing.pipeline.name = appender-${ctx:pipeline.id}
+appender.routing.pipeline.fileName = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log
+appender.routing.pipeline.filePattern = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz
+appender.routing.pipeline.layout.type = PatternLayout
+appender.routing.pipeline.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
+appender.routing.pipeline.policy.type = SizeBasedTriggeringPolicy
+appender.routing.pipeline.policy.size = 100MB
+appender.routing.pipeline.strategy.type = DefaultRolloverStrategy
+appender.routing.pipeline.strategy.max = 30
+
+rootLogger.level = ${sys:ls.log.level}
+rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
+rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
+rootLogger.appenderRef.routing.ref = pipeline_routing_appender
+
+# Slowlog
+
+appender.console_slowlog.type = Console
+appender.console_slowlog.name = plain_console_slowlog
+appender.console_slowlog.layout.type = PatternLayout
+appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
+
+appender.json_console_slowlog.type = Console
+appender.json_console_slowlog.name = json_console_slowlog
+appender.json_console_slowlog.layout.type = JSONLayout
+appender.json_console_slowlog.layout.compact = true
+appender.json_console_slowlog.layout.eventEol = true
+
+appender.rolling_slowlog.type = RollingFile
+appender.rolling_slowlog.name = plain_rolling_slowlog
+appender.rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-plain.log
+appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-plain-%d{yyyy-MM-dd}-%i.log.gz
+appender.rolling_slowlog.policies.type = Policies
+appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
+appender.rolling_slowlog.policies.time.interval = 1
+appender.rolling_slowlog.policies.time.modulate = true
+appender.rolling_slowlog.layout.type = PatternLayout
+appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
+appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
+appender.rolling_slowlog.policies.size.size = 100MB
+appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy
+appender.rolling_slowlog.strategy.max = 30
+
+appender.json_rolling_slowlog.type = RollingFile
+appender.json_rolling_slowlog.name = json_rolling_slowlog
+appender.json_rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-json.log
+appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-json-%d{yyyy-MM-dd}-%i.log.gz
+appender.json_rolling_slowlog.policies.type = Policies
+appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
+appender.json_rolling_slowlog.policies.time.interval = 1
+appender.json_rolling_slowlog.policies.time.modulate = true
+appender.json_rolling_slowlog.layout.type = JSONLayout
+appender.json_rolling_slowlog.layout.compact = true
+appender.json_rolling_slowlog.layout.eventEol = true
+appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
+appender.json_rolling_slowlog.policies.size.size = 100MB
+appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy
+appender.json_rolling_slowlog.strategy.max = 30
+
+logger.slowlog.name = slowlog
+logger.slowlog.level = trace
+logger.slowlog.appenderRef.console_slowlog.ref = ${sys:ls.log.format}_console_slowlog
+logger.slowlog.appenderRef.rolling_slowlog.ref = ${sys:ls.log.format}_rolling_slowlog
+logger.slowlog.additivity = false
+
+logger.licensereader.name = logstash.licensechecker.licensereader
+logger.licensereader.level = error
+
+# Silence http-client by default
+logger.apache_http_client.name = org.apache.http
+logger.apache_http_client.level = fatal
+
+# Deprecation log
+appender.deprecation_rolling.type = RollingFile
+appender.deprecation_rolling.name = deprecation_plain_rolling
+appender.deprecation_rolling.fileName = ${sys:ls.logs}/logstash-deprecation.log
+appender.deprecation_rolling.filePattern = ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz
+appender.deprecation_rolling.policies.type = Policies
+appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy
+appender.deprecation_rolling.policies.time.interval = 1
+appender.deprecation_rolling.policies.time.modulate = true
+appender.deprecation_rolling.layout.type = PatternLayout
+appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
+appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
+appender.deprecation_rolling.policies.size.size = 100MB
+appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
+appender.deprecation_rolling.strategy.max = 30
+
+logger.deprecation.name = org.logstash.deprecation, deprecation
+logger.deprecation.level = WARN
+logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
+logger.deprecation.additivity = false
+
+logger.deprecation_root.name = deprecation
+logger.deprecation_root.level = WARN
+logger.deprecation_root.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
+logger.deprecation_root.additivity = false
diff --git a/logstash_8.11.1/config/log4j2.properties b/logstash_8.11.1/config/log4j2.properties
new file mode 100644
index 0000000..663a015
--- /dev/null
+++ b/logstash_8.11.1/config/log4j2.properties
@@ -0,0 +1,16 @@
+status = error
+name = LogstashPropertiesConfig
+
+appender.console.type = Console
+appender.console.name = plain_console
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
+
+appender.json_console.type = Console
+appender.json_console.name = json_console
+appender.json_console.layout.type = JSONLayout
+appender.json_console.layout.compact = true
+appender.json_console.layout.eventEol = true
+
+rootLogger.level = ${sys:ls.log.level}
+rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
diff --git a/logstash_8.11.1/config/logstash-full.yml b/logstash_8.11.1/config/logstash-full.yml
new file mode 100644
index 0000000..58e1a35
--- /dev/null
+++ b/logstash_8.11.1/config/logstash-full.yml
@@ -0,0 +1,2 @@
+http.host: "0.0.0.0"
+xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
diff --git a/logstash_8.11.1/config/pipelines.yml b/logstash_8.11.1/config/pipelines.yml
new file mode 100644
index 0000000..aed22ce
--- /dev/null
+++ b/logstash_8.11.1/config/pipelines.yml
@@ -0,0 +1,6 @@
+# This file is where you define your pipelines. You can define multiple.
+# For more information on multiple pipelines, see the documentation:
+#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
+
+- pipeline.id: main
+  path.config: "/usr/share/logstash/pipeline"
diff --git a/logstash_8.11.1/env2yaml/env2yaml b/logstash_8.11.1/env2yaml/env2yaml
new file mode 100755
index 0000000..6a2a9e0
Binary files /dev/null and b/logstash_8.11.1/env2yaml/env2yaml differ
diff --git a/logstash_8.11.1/pipeline/default.conf b/logstash_8.11.1/pipeline/default.conf
new file mode 100644
index 0000000..11ce14c
--- /dev/null
+++ b/logstash_8.11.1/pipeline/default.conf
@@ -0,0 +1,12 @@
+input {
+  beats {
+    port => 5044
+  }
+}
+
+output {
+  stdout {
+    codec => rubydebug
+  }
+}
+

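(Editorial aside: although the new Dockerfiles drop the "re-bundles" preamble, including its "reproduced locally" note, the one-liner from the removed comment above still describes an equivalent local build using docker's git-context syntax, here with an added tag for convenience:)

    docker build -t logstash:8.11.1 'https://github.com/elastic/dockerfiles.git#v8.11.1:logstash'
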
Relevant Maintainers:

@tianon

tianon commented Dec 7, 2023

@mark-vieira, @watson, and @jsvd, would each of you please confirm you're OK with being the explicit maintainer of each of these images? 🙇 ❤️

(#15753 (comment))

https://github.com/docker-library/official-images#contributing-to-the-standard-library might be a useful link/start for you (in deciding and in the future) 👀

@mark-vieira

> @mark-vieira, @watson, and @jsvd, would each of you please confirm you're OK with being the explicit maintainer of each of these images? 🙇 ❤️

Thinking about this more, does the maintainer have to be a single user, or can it be a GitHub Team?

@tianon

tianon commented Dec 7, 2023

A GitHub team is OK, but we'll want at least one individual listed too, because someone outside the org/team unfortunately can't @-mention a team on GitHub (so neither we nor our bot would be able to do so 😞).

@mark-vieira

> A GitHub team is OK, but we'll want at least one individual listed too, because someone outside the org/team unfortunately can't @-mention a team on GitHub (so neither we nor our bot would be able to do so 😞).

Grr. Does the contact have to be a GitHub alias? Could it be an email mailing list or something? I'm just trying to avoid the obvious problem of folks moving teams, leaving the org, etc.

@tianon

tianon commented Dec 7, 2023

Email is an optional component of the format in the file, but most of our pings happen on GitHub (as you can see in the bot comment above on this PR) 😞

What we typically do with these files for images we maintain is write a script (or a program) that generates the appropriate file content for making PRs to update the images; you could write something similar that enumerates the GitHub team (and thus keeps the maintainer list in sync with the actual team membership) 👀 (see the sketch below)

(Pulling that thought thread a little is probably going to lead to some questions that might be answered by https://github.com/docker-library/faq#can-i-use-a-bot-to-make-my-image-update-prs so I'll just drop that here in case it does 😄)
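
(A minimal sketch of such a generator, as an editorial illustration rather than the official-images tooling: the org/team names are placeholders, and it assumes an authenticated GitHub CLI, `gh`, plus `jq`.)

    #!/bin/sh
    # Emit a "Maintainers:" line from a GitHub team's current membership.
    # "example-org" and "example-team" are hypothetical names.
    gh api "orgs/example-org/teams/example-team/members" \
      | jq -r '"Maintainers: " + ([.[] | "@" + .login] | join(", "))'
    # (Full names, as in "Mark Vieira (@mark-vieira)", would need one extra
    #  lookup per login, e.g. `gh api users/<login>`.)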

@mark-vieira

> (Pulling that thought thread a little is probably going to lead to some questions that might be answered by https://github.com/docker-library/faq#can-i-use-a-bot-to-make-my-image-update-prs so I'll just drop that here in case it does 😄)

@alpar-t this might be something to consider going forward.

@mark-vieira

> Email is an optional component of the format in the file, but most of our pings happen on GitHub (as you can see in the bot comment above on this PR) 😞

In the interest of getting our ARM images available sooner rather than later, I'm good with assigning myself for now. @watson and @jsvd, thoughts?

@watson

watson commented Dec 12, 2023

I'm not the correct maintainer for Kibana. I'm currently reaching out internally to find the right person. However, if you want to merge this to unblock the ARM images, go ahead. We can always update this maintainer once I figure out who it is.

@watson

watson commented Dec 12, 2023

After checking, I think I'm probably the best candidate to put down as the maintainer for Kibana after all 👍

@jsvd

jsvd commented Dec 12, 2023

Sign me up. In the future I'd like the first point of contact to be a mailing list and potentially a bot, to avoid the issues raised by @mark-vieira.

@tianon tianon merged commit 75aeabf into docker-library:master Dec 12, 2023
11 checks passed
@tianon tianon deleted the elk branch December 12, 2023 17:10
tianon added a commit to docker-library/oi-janky-groovy that referenced this pull request Dec 14, 2023
@tianon tianon mentioned this pull request Mar 26, 2024