Use Kustomize to build elastic-agent manifests for both managed and standalone mode #2104

Merged: 17 commits, Jan 17, 2023

Changes from all commits
2 changes: 2 additions & 0 deletions dev-tools/kubernetes/.gitignore
@@ -0,0 +1,2 @@
overlays/**/.env
*.yaml
31 changes: 31 additions & 0 deletions dev-tools/kubernetes/Taskfile.yaml
@@ -0,0 +1,31 @@
# https://taskfile.dev

version: '3'

tasks:
  default:
    cmds:
      - task: managed
      # - task: standalone

  build:
    cmds:
      # note: create overlay
      - kustomize build overlays/elastic-agent-managed > elastic-agent-managed-kubernetes.yaml
      - kustomize build overlays/elastic-agent-standalone > elastic-agent-standalone-kubernetes.yaml

  managed:
    cmds:
      - kubectl apply -k overlays/elastic-agent-managed

  managed-delete:
    cmds:
      - kubectl delete -k overlays/elastic-agent-managed

  standalone:
    cmds:
      - kubectl apply -k overlays/elastic-agent-standalone

  standalone-delete:
    cmds:
      - kubectl delete -k overlays/elastic-agent-standalone
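
With the Taskfile above, day-to-day usage reduces to a few commands. A minimal sketch, assuming go-task (https://taskfile.dev), kustomize and kubectl are installed and the commands are run from dev-tools/kubernetes:

    task                    # default task: apply the managed overlay to the current cluster
    task build              # render elastic-agent-managed-kubernetes.yaml and elastic-agent-standalone-kubernetes.yaml
    task standalone         # apply the standalone overlay
    task managed-delete     # tear the managed deployment down again

The rendered *.yaml files land in dev-tools/kubernetes, which is why the .gitignore above excludes them.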
11 changes: 11 additions & 0 deletions dev-tools/kubernetes/base/common/cluster-role-binding.yaml
@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: service-account
roleRef:
  kind: ClusterRole
  name: cluster-role
  apiGroup: rbac.authorization.k8s.io
67 changes: 67 additions & 0 deletions dev-tools/kubernetes/base/common/cluster-role.yaml
@@ -0,0 +1,67 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - events
      - pods
      - services
      - configmaps
      # Needed for cloudbeat
      - serviceaccounts
      - persistentvolumes
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  # Enable this rule only if planning to use the kubernetes_secrets provider
  # - apiGroups: [""]
  #   resources:
  #     - secrets
  #   verbs: ["get"]
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - deployments
      - replicasets
      - daemonsets
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["get", "list", "watch"]
  # Needed for apiserver
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
  # Needed for cloudbeat
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterrolebindings
      - clusterroles
      - rolebindings
      - roles
    verbs: ["get", "list", "watch"]
  # Needed for cloudbeat
  - apiGroups: ["policy"]
    resources:
      - podsecuritypolicies
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources:
      - storageclasses
    verbs: ["get", "list", "watch"]
6 changes: 6 additions & 0 deletions dev-tools/kubernetes/base/common/kustomization.yml
@@ -0,0 +1,6 @@
resources:
- service-account.yaml
- role-leases.yaml
- cluster-role.yaml
- role-binding-leases.yaml
- cluster-role-binding.yaml
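
The base resources deliberately use generic names (service-account, cluster-role, and so on). When an overlay such as the managed one further down in this diff sets namePrefix: elastic-agent-managed-, kustomize renames the objects and its built-in name-reference transformer rewrites the references to them, so the bindings stay consistent. A rough sketch of what kustomize build should emit for the ClusterRoleBinding, with names assumed from that overlay:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: elastic-agent-managed-cluster-role-binding
      labels:
        k8s-app: elastic-agent
    subjects:
      - kind: ServiceAccount
        name: elastic-agent-managed-service-account
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: elastic-agent-managed-cluster-role
      apiGroup: rbac.authorization.k8s.io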
11 changes: 11 additions & 0 deletions dev-tools/kubernetes/base/common/role-binding-leases.yaml
@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding-leases
subjects:
  - kind: ServiceAccount
    name: service-account
roleRef:
  kind: Role
  name: role-leases
  apiGroup: rbac.authorization.k8s.io
10 changes: 10 additions & 0 deletions dev-tools/kubernetes/base/common/role-leases.yaml
@@ -0,0 +1,10 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-leases
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
4 changes: 4 additions & 0 deletions dev-tools/kubernetes/base/common/service-account.yaml
@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
6 changes: 6 additions & 0 deletions dev-tools/kubernetes/base/elastic-agent-managed/.env
@@ -0,0 +1,6 @@
FLEET_URL=https://fleet-server:8220
FLEET_ENROLLMENT_TOKEN=token-id
FLEET_INSECURE=true
KIBANA_HOST=http://kibana:5601
KIBANA_FLEET_USERNAME=elastic
KIBANA_FLEET_PASSWORD=changeme
116 changes: 116 additions & 0 deletions dev-tools/kubernetes/base/elastic-agent-managed/daemonset.yaml
@@ -0,0 +1,116 @@
# For more information https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-managed-by-fleet.html
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset
spec:
  template:
    spec:
      # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
      # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: service-account
      hostNetwork: true
      # 'hostPID: true' enables the Elastic Security integration to observe all process exec events on the host.
      # Sharing the host process ID namespace gives visibility of all processes running on the same host.
      hostPID: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: elastic-agent
          image: docker.elastic.co/beats/elastic-agent:8.1.0
          env:
            # Set to 1 for enrollment into Fleet server. If not set, Elastic Agent is run in standalone mode
            - name: FLEET_ENROLL
              value: "1"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          envFrom:
            - configMapRef:
                # Fleet Server URL to enroll the Elastic Agent into
                # FLEET_URL can be found in Kibana, go to Management > Fleet > Settings
                # - name: FLEET_URL
Review thread:

Member: Just out of curiosity, why are all these settings commented out?

Contributor (author): They are mostly left there for documentation. We should address them, one way or another, in the next update on this topic.

Member: But when should those vars be uncommented?

Contributor (author): I have created a PR to fix this.
# value: "https://fleet-server:8220"
# Elasticsearch API key used to enroll Elastic Agents in Fleet (https://www.elastic.co/guide/en/fleet/current/fleet-enrollment-tokens.html#fleet-enrollment-tokens)
# If FLEET_ENROLLMENT_TOKEN is empty then KIBANA_HOST, KIBANA_FLEET_USERNAME, KIBANA_FLEET_PASSWORD are needed
# - name: FLEET_ENROLLMENT_TOKEN
# value: "token-id"
# Set to true to communicate with Fleet with either insecure HTTP or unverified HTTPS
# - name: FLEET_INSECURE
# value: "true"
# - name: KIBANA_HOST
# value: "http://kibana:5601"
# # The basic authentication username used to connect to Kibana and retrieve a service_token to enable Fleet
# - name: KIBANA_FLEET_USERNAME
# value: "elastic"
# # The basic authentication password used to connect to Kibana and retrieve a service_token to enable Fleet
# - name: KIBANA_FLEET_PASSWORD
# value: "changeme"
name: configs
securityContext:
runAsUser: 0
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
Review thread:

Contributor: I think the following part can also be split into multiple overlays. We want the mounts needed for observability, plus the extra ones imposed by the security use case.

Contributor (author): Do you mind elaborating a bit more on the topic? I vaguely recollect some suggestions from the security team, but I wouldn't know how to change this part of the manifest. (A sketch of one possible split follows after this file.)
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: etc-full
              mountPath: /hostfs/etc
              readOnly: true
            - name: var-lib
              mountPath: /hostfs/var/lib
              readOnly: true
            - name: etc-mid
              mountPath: /etc/machine-id
              readOnly: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # The following volumes are needed for Cloud Security Posture integration (cloudbeat)
        # If you are not using this integration, then these volumes and the corresponding
        # mounts can be removed.
        - name: etc-full
          hostPath:
            path: /etc
        - name: var-lib
          hostPath:
            path: /var/lib
        # Mount /etc/machine-id from the host to determine host ID
        # Needed for Elastic Security integration
        - name: etc-mid
          hostPath:
            path: /etc/machine-id
            type: File
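
Regarding the review comment above about splitting the security-specific mounts into their own overlay: if the cloudbeat/Elastic Security host paths were dropped from the base daemonset, an extra overlay could add them back with a strategic-merge patch. This is only a sketch; the overlay directory, file names, and the final resource name (elastic-agent-managed-daemonset, i.e. the name after the managed overlay's namePrefix) are assumptions, not part of this PR:

    # overlays/elastic-agent-managed-security/kustomization.yaml (hypothetical)
    resources:
      - ../elastic-agent-managed        # reuse the managed overlay as-is
    patches:
      - path: security-mounts.yaml      # layer the security-only host paths on top

    # overlays/elastic-agent-managed-security/security-mounts.yaml (hypothetical)
    # Strategic merge: containers merge by name, volumeMounts by mountPath, volumes by name.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: elastic-agent-managed-daemonset
    spec:
      template:
        spec:
          containers:
            - name: elastic-agent
              volumeMounts:
                - name: etc-full
                  mountPath: /hostfs/etc
                  readOnly: true
                - name: etc-mid
                  mountPath: /etc/machine-id
                  readOnly: true
          volumes:
            - name: etc-full
              hostPath:
                path: /etc
            - name: etc-mid
              hostPath:
                path: /etc/machine-id
                type: File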
@@ -0,0 +1,20 @@
namespace: kube-system
commonLabels:
  k8s-app: elastic-agent
namePrefix: elastic-agent-managed-

images:
  - name: docker.elastic.co/beats/elastic-agent
    newTag: "8.6.0"

resources:
  - ../../base/common
  - daemonset.yaml

configMapGenerator:
  - name: configs
    envs:
      - .env

generatorOptions:
  disableNameSuffixHash: true
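
For reference, the configMapGenerator above turns the overlay's .env file into a ConfigMap, and because the daemonset's envFrom entry references a ConfigMap called configs, kustomize rewrites that reference to the prefixed name as well. Roughly what kustomize build should emit, assuming the sample values from base/elastic-agent-managed/.env (real overlay .env files are gitignored, so actual values will differ):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: elastic-agent-managed-configs   # namePrefix applied, no hash suffix (disableNameSuffixHash: true)
      namespace: kube-system
      labels:
        k8s-app: elastic-agent
    data:
      FLEET_URL: https://fleet-server:8220
      FLEET_ENROLLMENT_TOKEN: token-id
      FLEET_INSECURE: "true"
      KIBANA_HOST: http://kibana:5601
      KIBANA_FLEET_USERNAME: elastic
      KIBANA_FLEET_PASSWORD: changeme

These keys become environment variables in the elastic-agent container via envFrom, which is why the individual env entries could be left commented out in the daemonset.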
5 changes: 5 additions & 0 deletions dev-tools/kubernetes/base/elastic-agent-standalone/.env
@@ -0,0 +1,5 @@
ES_USERNAME=elastic
ES_PASSWORD=changeme
ES_HOST=https://elasticsearch:9200
Review thread:

Contributor: I would suggest putting the namespace as a variable here.

Contributor (author): I don't think you can use a variable from an .env file in the kustomization.yml file (which is where I set the namespace for all the resources).

Contributor: Can we have a boolean on/off option here for installing kube-state-metrics? By default it should be on.

Contributor (author): I don't think so. This env file is only used to create a ConfigMap, and I don't think we can use an env variable from a ConfigMap to decide whether to add a manifest. We could use an overlay with an extra resource to install kube-state-metrics or not.

Contributor: But then we will need to edit the kustomization.yaml. Maybe we can provide different kustomization files, e.g. kustomize-ksmenabled and kustomize-ksmdisabled?

Contributor (author): Yes, I believe we have to go that way: different overlays for different behaviours. (A sketch follows after this file.)

Contributor (gizas, Jan 16, 2023):
1st option: a tool worth looking at is https://carvel.dev/ytt/. We could provide some basic YAML templating and then a kustomize installation.
2nd option: maybe our new Makefile, which we could run to produce specific kustomize folders and run as a one-liner command?

Contributor (author): I have heard about ytt, but it seemed way too complicated compared to kustomize. Should we create another issue with the list of requirements before diving into ytt?

ES_SSL_VERIFICATION_MODE=full
ES_ALLOW_OLDER_VERSIONS=false
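
Following the discussion above about toggling kube-state-metrics with different overlays rather than an env flag, a minimal sketch of what that could look like. The directory layout and the vendored kube-state-metrics path are assumptions for illustration only:

    # overlays/elastic-agent-standalone-ksm/kustomization.yaml (hypothetical)
    resources:
      - ../elastic-agent-standalone        # the plain standalone overlay, no kube-state-metrics
      - ../../base/kube-state-metrics      # vendored kube-state-metrics manifests (assumed location)

Users who do not want kube-state-metrics keep applying overlays/elastic-agent-standalone; users who do point kubectl apply -k (or the corresponding task) at the -ksm overlay instead.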