Update K8s docs to include OpenShift config info. (elastic#8300)
* Update K8s docs to include OpenShift config info.

* Add changes from the review

* Add another fix from review

* Update correct yaml files and run make update

* Update permissions
dedemorton authored Oct 5, 2018
1 parent 41f5680 commit 3487b6c
Showing 7 changed files with 171 additions and 44 deletions.
2 changes: 2 additions & 0 deletions deploy/kubernetes/filebeat-kubernetes.yaml
@@ -89,6 +89,8 @@ spec:
value:
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
2 changes: 2 additions & 0 deletions deploy/kubernetes/filebeat/filebeat-daemonset.yaml
@@ -35,6 +35,8 @@ spec:
value:
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
12 changes: 12 additions & 0 deletions deploy/kubernetes/metricbeat-kubernetes.yaml
@@ -77,6 +77,12 @@ data:
period: 10s
host: ${NODE_NAME}
hosts: ["localhost:10255"]
# If using Red Hat OpenShift remove the previous hosts entry and
# uncomment these settings:
#hosts: ["https://${HOSTNAME}:10250"]
#bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
#ssl.certificate_authorities:
#- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: extensions/v1beta1
@@ -320,6 +326,12 @@ rules:
- statefulsets
- deployments
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
@@ -77,3 +77,9 @@ data:
period: 10s
host: ${NODE_NAME}
hosts: ["localhost:10255"]
# If using Red Hat OpenShift remove the previous hosts entry and
# uncomment these settings:
#hosts: ["https://${HOSTNAME}:10250"]
#bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
#ssl.certificate_authorities:
#- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
6 changes: 6 additions & 0 deletions deploy/kubernetes/metricbeat/metricbeat-role.yaml
@@ -21,3 +21,9 @@ rules:
- statefulsets
- deployments
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
80 changes: 59 additions & 21 deletions filebeat/docs/running-on-kubernetes.asciidoc
@@ -1,7 +1,7 @@
[[running-on-kubernetes]]
=== Running Filebeat on Kubernetes
=== Running {beatname_uc} on Kubernetes

Filebeat <<running-on-docker,Docker images>> can be used on Kubernetes to
You can use {beatname_uc} <<running-on-docker,Docker images>> on Kubernetes to
retrieve and ship container logs.

ifeval::["{release-state}"=="unreleased"]
@@ -15,17 +15,17 @@ endif::[]
[float]
==== Kubernetes deploy manifests

By deploying Filebeat as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet]
we ensure we get a running instance on each node of the cluster.
You deploy {beatname_uc} as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet]
to ensure there's a running instance on each node of the cluster.

Docker logs host folder (`/var/lib/docker/containers`) is mounted on the Filebeat
container. Filebeat will start an input for these files and start harvesting
them as they appear.
The Docker logs host folder (`/var/lib/docker/containers`) is mounted on the
{beatname_uc} container. {beatname_uc} starts an input for the files and
begins harvesting them as soon as they appear in the folder.
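
For orientation, the relevant wiring in the DaemonSet spec looks roughly like
the following sketch (abridged and illustrative; the volume name is an
assumption, only the `/var/lib/docker/containers` path is the one described
above, so refer to the downloaded manifest for the exact spec):

[source,yaml]
-----
kind: DaemonSet
spec:
  template:
    spec:
      containers:
      - name: filebeat
        volumeMounts:
        - name: varlibdockercontainers   # illustrative volume name
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
-----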

Everything is deployed under `kube-system` namespace, you can change that by
updating the YAML file.
Everything is deployed under the `kube-system` namespace by default. To change
the namespace, modify the manifest file.
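
For example, change every `namespace: kube-system` reference in the manifest
to your own namespace (a minimal sketch; `logging` is just a placeholder
name):

[source,yaml]
-----
metadata:
  name: filebeat
  namespace: logging   # replace kube-system with your namespace
-----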

To get the manifests just run:
To download the manifest file, run:

["source", "sh", subs="attributes"]
------------------------------------------------
@@ -34,19 +34,19 @@ curl -L -O https://raw.githubusercontent.com/elastic/beats/{doc-branch}/deploy/k

[WARNING]
=======================================
If you are using Kubernetes 1.7 or earlier: {beatname_uc} uses a hostPath volume to persist internal data, it's located
under /var/lib/{beatname_lc}-data. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
Kubernetes 1.8. You will need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
*If you are using Kubernetes 1.7 or earlier:* {beatname_uc} uses a hostPath volume to persist internal data. It's located
under +/var/lib/{beatname_lc}-data+. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
Kubernetes 1.8. You need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
=======================================
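
On those older clusters, the workaround looks roughly like this (an
illustrative sketch; the volume name in the shipped manifest may differ):

[source,yaml]
-----
volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    # type: DirectoryOrCreate   <- remove this line on Kubernetes 1.7 or earlier
-----

Then create the `/var/lib/filebeat-data` folder on every node yourself, for
example with `mkdir -p`, before deploying.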

[float]
==== Settings

Some parameters are exposed in the manifest to configure logs destination, by
default they will use an existing Elasticsearch deploy if it's present, but you
may want to change that behavior, so just edit the YAML file and modify them:
By default, {beatname_uc} sends events to an existing Elasticsearch deployment,
if present. To specify a different destination, change the following parameters
in the manifest file:

["source", "yaml", subs="attributes"]
[source,yaml]
------------------------------------------------
- name: ELASTICSEARCH_HOST
value: elasticsearch
@@ -58,17 +58,55 @@ may want to change that behavior, so just edit the YAML file and modify them:
value: changeme
------------------------------------------------

[float]
===== Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in
the manifest file and enable the container to run as privileged.

. Modify the `DaemonSet` container spec in the manifest file:
+
[source,yaml]
-----
securityContext:
runAsUser: 0
privileged: true
-----

. Grant the `filebeat` service account access to the privileged SCC:
+
[source,shell]
-----
oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:filebeat
-----
+
This command allows pods that use the `filebeat` service account to run
privileged containers on OpenShift.

. Override the default node selector for the `kube-system` namespace (or your
custom namespace) to allow for scheduling on any node:
+
[source,shell]
----
oc patch namespace kube-system -p \
'{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'
----
+
This command sets the node selector for the project to an empty string. If you
don't run this command, the default node selector will skip master nodes.
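
To confirm the annotation is in place, you can inspect the namespace (a
quick, optional check):

[source,shell]
----
oc get namespace kube-system -o yaml
----

The output should list the `openshift.io/node-selector` annotation with an
empty value.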


[float]
==== Deploy

To deploy Filebeat to Kubernetes just run:
To deploy {beatname_uc} to Kubernetes, run:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl create -f filebeat-kubernetes.yaml
------------------------------------------------

Then you should be able to check the status by running:
To check the status, run:

["source", "sh", subs="attributes"]
------------------------------------------------
@@ -78,5 +78,5 @@ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR
filebeat 32 32 0 32 0 <none> 1m
------------------------------------------------

Logs should start flowing to Elasticsearch, all annotated with <<add-kubernetes-metadata>>
processor.
Log events should start flowing to Elasticsearch. The events are annotated with
metadata added by the <<add-kubernetes-metadata>> processor.
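
As a rough illustration, the added metadata lands under the `kubernetes` key
of each event, along these lines (the values are made up, and the exact field
set depends on your cluster and {beatname_uc} version):

[source,yaml]
-----
kubernetes:
  namespace: default            # made-up values for illustration
  pod:
    name: nginx-84f7d6bd4-tm7kq
  node:
    name: worker-1
  labels:
    app: nginx
-----
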
107 changes: 84 additions & 23 deletions metricbeat/docs/running-on-kubernetes.asciidoc
@@ -1,7 +1,7 @@
[[running-on-kubernetes]]
=== Running Metricbeat on Kubernetes

Metricbeat <<running-on-docker,Docker images>> can be used on Kubernetes to
You can use {beatname_uc} <<running-on-docker,Docker images>> on Kubernetes to
retrieve cluster metrics.

ifeval::["{release-state}"=="unreleased"]
@@ -15,21 +15,23 @@ endif::[]
[float]
==== Kubernetes deploy manifests

Metricbeat is deployed in two different ways at the same time:
You deploy {beatname_uc} in two different ways at the same time:

By deploying Metricbeat as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet]
we ensure we get a running instance on each node of the cluster. It will be used
to retrieve most metrics from the host, like system metrics, Docker stats and
metrics from all the services running on top of Kubernetes.
* As a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet]
to ensure that there's a running instance on each node of the cluster. These
instances are used to retrieve most metrics from the host, such as system
metrics, Docker stats, and metrics from all the services running on top of
Kubernetes.

A single Metricbeat instance is also created using a https://kubernetes.io/docs/concepts/workloads/controllers/Deployment/[Deployment].
It will retrieve metrics that are unique for the whole cluster, like
Kubernetes events or https://github.com/kubernetes/kube-state-metrics[kube-state-metrics].
* As a single {beatname_uc} instance created using a https://kubernetes.io/docs/concepts/workloads/controllers/Deployment/[Deployment].
This instance is used to retrieve metrics that are unique for the whole
cluster, such as Kubernetes events or
https://github.com/kubernetes/kube-state-metrics[kube-state-metrics].

Everything is deployed under `kube-system` namespace, you can change that by
updating the YAML file.
Everything is deployed under the `kube-system` namespace by default. To change
the namespace, modify the manifest file.
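
If you prefer to script the namespace change, a one-liner along these lines
works with GNU sed (on macOS/BSD sed use `-i ''`; `monitoring` is just a
placeholder namespace):

[source,shell]
-----
sed -i 's/namespace: kube-system/namespace: monitoring/' metricbeat-kubernetes.yaml
-----

Create the target namespace first (for example with
`kubectl create namespace monitoring`) if it doesn't already exist.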

To get the manifests just run:
To download the manifest file, run:

["source", "sh", subs="attributes"]
------------------------------------------------
@@ -38,19 +38,19 @@ curl -L -O https://raw.githubusercontent.com/elastic/beats/{doc-branch}/deploy/k

[WARNING]
=======================================
If you are using Kubernetes 1.7 or earlier: {beatname_uc} uses a hostPath volume to persist internal data, it's located
under /var/lib/{beatname_lc}-data. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
Kubernetes 1.8. You will need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
*If you are using Kubernetes 1.7 or earlier:* {beatname_uc} uses a hostPath volume to persist internal data. It's located
under +/var/lib/{beatname_lc}-data+. The manifest uses folder autocreation (`DirectoryOrCreate`), which was introduced in
Kubernetes 1.8. You need to remove `type: DirectoryOrCreate` from the manifest and create the host folder yourself.
=======================================

[float]
==== Settings

Some parameters are exposed in the manifest to configure logs destination, by
default they will use an existing Elasticsearch deploy if it's present, but you
may want to change that behavior, so just edit the YAML file and modify them:
By default, {beatname_uc} sends events to an existing Elasticsearch deployment,
if present. To specify a different destination, change the following parameters
in the manifest file:

["source", "yaml", subs="attributes"]
[source,yaml]
------------------------------------------------
- name: ELASTICSEARCH_HOST
value: elasticsearch
@@ -62,20 +64,79 @@ may want to change that behavior, so just edit the YAML file and modify them:
value: changeme
------------------------------------------------

[float]
===== Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in
the manifest file and enable the container to run as privileged.

. In the manifest file, edit the `metricbeat-daemonset-modules` ConfigMap, and
specify the following settings under `kubernetes.yml` in the `data` section:
+
[source,yaml]
-----
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
host: ${NODE_NAME}
hosts: ["https://${HOSTNAME}:10250"]
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
-----

. Under the `metricbeat` ClusterRole, add the following resources:
+
[source,yaml]
-----
- nodes/metrics
- nodes/stats
-----

. Grant the `metricbeat` service account access to the privileged SCC:
+
[source,shell]
-----
oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:metricbeat
-----
+
This command allows pods that use the `metricbeat` service account to run
privileged containers on OpenShift.

. Override the default node selector for the `kube-system` namespace (or your
custom namespace) to allow for scheduling on any node:
+
[source,shell]
----
oc patch namespace kube-system -p \
'{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'
----
+
This command sets the node selector for the project to an empty string. If you
don't run this command, the default node selector will skip master nodes.

[float]
==== Deploy

Metricbeat gets some metrics from https://github.com/kubernetes/kube-state-metrics#usage[kube-state-metrics],
you will need to deploy it if it's not already running.
Metricbeat gets some metrics from https://github.com/kubernetes/kube-state-metrics#usage[kube-state-metrics].
If `kube-state-metrics` is not already running, deploy it now (see the
https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment[Kubernetes
deployment] docs).
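
For reference, deploying it typically boils down to applying the manifests
from the kube-state-metrics repository; the directory layout changes between
releases, so treat this as a sketch and follow the linked docs for the current
instructions:

[source,shell]
-----
git clone https://github.com/kubernetes/kube-state-metrics
kubectl apply -f kube-state-metrics/kubernetes/
-----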

To deploy Metricbeat to Kubernetes just run:
To deploy {beatname_uc} to Kubernetes, run:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl create -f metricbeat-kubernetes.yaml
------------------------------------------------

Then you should be able to check the status by running:
To check the status, run:

["source", "sh", subs="attributes"]
------------------------------------------------
