diff --git a/examples/k8s_audit_config/README.md b/examples/k8s_audit_config/README.md
index 481a590f672..709ce2cac67 100644
--- a/examples/k8s_audit_config/README.md
+++ b/examples/k8s_audit_config/README.md
@@ -3,21 +3,36 @@
 The files in this directory can be used to configure k8s audit logging. The relevant files are:
 
 * [audit-policy.yaml](./audit-policy.yaml): The k8s audit log configuration we used to create the rules in [k8s_audit_rules.yaml](../../rules/k8s_audit_rules.yaml). You may find it useful as a reference when creating your own K8s Audit Log configuration.
-* [webhook-config.yaml](./webhook-config.yaml): A webhook configuration that sends audit events to localhost, port 8765. You may find it useful as a starting point when deciding how to route audit events to the embedded webserver within falco.
+* [webhook-config.yaml.in](./webhook-config.yaml.in): A (templated) webhook configuration that sends audit events to an IP associated with the falco service, port 8765. It is templated in that the *actual* IP is defined in an environment variable `FALCO_SERVICE_CLUSTERIP`, which can be plugged in using a program like `envsubst`. You may find it useful as a starting point when deciding how to route audit events to the embedded webserver within falco.
 
-This file is only needed when using Minikube, which doesn't currently
+These files are only needed when using Minikube, which doesn't currently
 have the ability to provide an audit config/webhook config directly from
 the minikube commandline. See [this issue](https://github.com/kubernetes/minikube/issues/2741) for more details.
 
 * [apiserver-config.patch.sh](./apiserver-config.patch.sh): A script that changes the configuration file `/etc/kubernetes/manifests/kube-apiserver.yaml` to add necessary config options and mounts for the kube-apiserver container that runs within the minikube vm.
-A way to use these files with minikube to enable audit logging would be to run the following commands, from this directory:
+One way to use these files with minikube to run falco with audit logging enabled is the following:
+
+#### Start Minikube with Audit Logging Enabled
+
+Run the following to start minikube with audit logging enabled:
 
 ```
 minikube start --kubernetes-version v1.11.0 --mount --mount-string $PWD:/tmp/k8s_audit_config --feature-gates AdvancedAuditing=true
+```
+
+#### Create a Falco DaemonSet and Supporting Accounts/Services
+
+Follow the [K8s Using Daemonset](../../integrations/k8s-using-daemonset/README.md) instructions to create a falco service account, service, configmap, and daemonset.
+
+#### Configure Audit Logging with a Policy and Webhook
+
+Run the following commands to fill in the template file with the ClusterIP address of the `falco-service` service you created above, and to configure audit logging to use a policy and webhook that direct the right events to the falco daemonset. Although service names like `falco-service.default.svc.cluster.local` cannot be resolved from the kube-apiserver container within the minikube vm (it runs as a pod but is not *really* a part of the cluster), the ClusterIPs associated with those services are routable.
+
+```
+FALCO_SERVICE_CLUSTERIP=$(kubectl get service falco-service -o=jsonpath={.spec.clusterIP}) envsubst < webhook-config.yaml.in > webhook-config.yaml
 ssh -i $(minikube ssh-key) docker@$(minikube ip) sudo bash /tmp/k8s_audit_config/apiserver-config.patch.sh
-ssh -i $(minikube ssh-key) -R 8765:localhost:8765 docker@$(minikube ip)
 ```
 
-K8s audit events will then be sent to localhost on the host (not minikube vm) machine, port 8765.
+K8s audit events will then be routed to the falco daemonset within the cluster, which you can observe via `kubectl logs -f $(kubectl get pods -l app=falco-example -o jsonpath={.items[0].metadata.name})`.
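The template-plus-`envsubst` step above can be sketched without a cluster. This is a minimal stand-in: the `10.107.65.202` ClusterIP is made up, and `sed` mimics `envsubst` so the sketch needs no extra packages:

```
# What the envsubst step does, shown without a cluster. The template holds
# the literal string $FALCO_SERVICE_CLUSTERIP; substitution replaces it with
# the service's ClusterIP. 10.107.65.202 is a made-up example address.
FALCO_SERVICE_CLUSTERIP="10.107.65.202"
tmpl='server: http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit'
# sed here mimics: echo "$tmpl" | envsubst
echo "$tmpl" | sed "s/\$FALCO_SERVICE_CLUSTERIP/$FALCO_SERVICE_CLUSTERIP/"
# -> server: http://10.107.65.202:8765/k8s_audit
```

With a real cluster, `FALCO_SERVICE_CLUSTERIP` comes from the `kubectl get service falco-service -o=jsonpath={.spec.clusterIP}` invocation shown above.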
diff --git a/examples/k8s_audit_config/webhook-config.yaml b/examples/k8s_audit_config/webhook-config.yaml.in
similarity index 77%
rename from examples/k8s_audit_config/webhook-config.yaml
rename to examples/k8s_audit_config/webhook-config.yaml.in
index f188dbdb5d5..3ace6a964bd 100644
--- a/examples/k8s_audit_config/webhook-config.yaml
+++ b/examples/k8s_audit_config/webhook-config.yaml.in
@@ -3,7 +3,7 @@ kind: Config
 clusters:
 - name: falco
   cluster:
-    server: http://127.0.0.1:8765/k8s_audit
+    server: http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit
 contexts:
 - context:
     cluster: falco
diff --git a/integrations/k8s-using-daemonset/README.md b/integrations/k8s-using-daemonset/README.md
index e55fbd9ac29..e224fa7367c 100644
--- a/integrations/k8s-using-daemonset/README.md
+++ b/integrations/k8s-using-daemonset/README.md
@@ -4,7 +4,7 @@ This directory gives you the required YAML files to stand up Sysdig Falco on Kub
 The two options are provided to deploy a Daemon Set:
 
 - `k8s-with-rbac` - This directory provides a definition to deploy a Daemon Set on Kubernetes with RBAC enabled.
-- `k8s-without-rbac` - This directory provides a definition to deploy a Daemon Set on Kubernetes without RBAC enabled.
+- `k8s-without-rbac` - This directory provides a definition to deploy a Daemon Set on Kubernetes without RBAC enabled. **This method is deprecated in favor of RBAC-based installs, and won't be updated going forward.**
 
 Also provided:
 
 - `falco-event-generator-deployment.yaml` - A Kubernetes Deployment to generate sample events. This is useful for testing, but note it will generate a large number of events.
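Once `envsubst` fills in the template, the rendered `webhook-config.yaml` looks roughly like the fragment below. Only the `clusters` section appears in the diff above; the remaining fields follow the usual kubeconfig shape and are assumptions here, and `10.107.65.202` is a hypothetical ClusterIP:

```
# Rendered webhook-config.yaml (sketch). The ClusterIP is a made-up example;
# fields outside the clusters section are assumed, not taken from the diff.
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://10.107.65.202:8765/k8s_audit
contexts:
- context:
    cluster: falco
  name: default-context
current-context: default-context
```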
@@ -21,11 +21,20 @@ clusterrolebinding "falco-cluster-role-binding" created
 k8s-using-daemonset$
 ```
 
+We also create a service that allows other services to reach the embedded webserver in falco, which listens on http port 8765:
+
+```
+k8s-using-daemonset$ kubectl create -f k8s-with-rbac/falco-service.yaml
+service/falco-service created
+k8s-using-daemonset$
+```
+
 The Daemon Set also relies on a Kubernetes ConfigMap to store the Falco configuration and make the configuration available to the Falco Pods. This allows you to manage custom configuration without rebuilding and redeploying the underlying Pods. In order to create the ConfigMap you'll first need to copy the required configuration from their location in this GitHub repo to the `k8s-with-rbac/falco-config/` directory. Any modification of the configuration should be performed on these copies rather than the original files.
 
 ```
 k8s-using-daemonset$ cp ../../falco.yaml k8s-with-rbac/falco-config/
 k8s-using-daemonset$ cp ../../rules/falco_rules.* k8s-with-rbac/falco-config/
+k8s-using-daemonset$ cp ../../rules/k8s_audit_rules.yaml k8s-with-rbac/falco-config/
 ```
 
 If you want to send Falco alerts to a Slack channel, you'll want to modify the `falco.yaml` file to point to your Slack webhook. For more information on getting a webhook URL for your Slack team, refer to the [Slack documentation](https://api.slack.com/incoming-webhooks). Add the below to the bottom of the `falco.yaml` config file you just copied to enable Slack messages.
@@ -54,7 +63,7 @@ k8s-using-daemonset$
 ```
 
-## Deploying to Kubernetes without RBAC enabled
+## Deploying to Kubernetes without RBAC enabled (**Deprecated**)
 
 If you are running Kubernetes with Legacy Authorization enabled, you can use `kubectl` to deploy the Daemon Set provided in the `k8s-without-rbac` directory. The example provides the ability to post messages to a Slack channel via a webhook.
 For more information on getting a webhook URL for your Slack team, refer to the [Slack documentation](https://api.slack.com/incoming-webhooks). Modify the [`args`](https://github.com/draios/falco/blob/dev/examples/k8s-using-daemonset/falco-daemonset.yaml#L21) passed to the Falco container to point to the appropriate URL for your webhook.
diff --git a/integrations/k8s-using-daemonset/k8s-with-rbac/falco-account.yaml b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-account.yaml
index 9d611519522..b3968a79e34 100644
--- a/integrations/k8s-using-daemonset/k8s-with-rbac/falco-account.yaml
+++ b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-account.yaml
@@ -2,11 +2,17 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: falco-account
+  labels:
+    app: falco-example
+    role: security
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1beta1
 metadata:
   name: falco-cluster-role
+  labels:
+    app: falco-example
+    role: security
 rules:
   - apiGroups: ["extensions",""]
     resources: ["nodes","namespaces","pods","replicationcontrollers","services","events","configmaps"]
@@ -19,6 +25,9 @@ apiVersion: rbac.authorization.k8s.io/v1beta1
 metadata:
   name: falco-cluster-role-binding
   namespace: default
+  labels:
+    app: falco-example
+    role: security
 subjects:
   - kind: ServiceAccount
     name: falco-account
diff --git a/integrations/k8s-using-daemonset/k8s-with-rbac/falco-daemonset-configmap.yaml b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-daemonset-configmap.yaml
index 406b7892649..b88a8fe56b1 100644
--- a/integrations/k8s-using-daemonset/k8s-with-rbac/falco-daemonset-configmap.yaml
+++ b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-daemonset-configmap.yaml
@@ -1,16 +1,15 @@
 apiVersion: extensions/v1beta1
 kind: DaemonSet
 metadata:
-  name: falco
+  name: falco-daemonset
   labels:
-    name: falco-daemonset
-    app: demo
+    app: falco-example
+    role: security
 spec:
   template:
     metadata:
       labels:
-        name: falco
-        app: demo
+        app: falco-example
         role: security
     spec:
       serviceAccount: falco-account
diff --git a/integrations/k8s-using-daemonset/k8s-with-rbac/falco-service.yaml b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-service.yaml
new file mode 100644
index 00000000000..3ed22658de5
--- /dev/null
+++ b/integrations/k8s-using-daemonset/k8s-with-rbac/falco-service.yaml
@@ -0,0 +1,13 @@
+kind: Service
+apiVersion: v1
+metadata:
+  name: falco-service
+  labels:
+    app: falco-example
+    role: security
+spec:
+  selector:
+    app: falco-example
+  ports:
+    - protocol: TCP
+      port: 8765
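The new Service sets only `port:`; with no `targetPort`, Kubernetes defaults it to the same value, 8765, which is where falco's embedded webserver listens. The matching section of `falco.yaml` looks roughly like the sketch below (values assumed from this PR's use of port 8765 and the `/k8s_audit` path; the `falco.yaml` shipped in the repo is authoritative):

```
# Sketch of the webserver section in falco.yaml. Values are assumptions
# inferred from this PR, not copied from the shipped file.
webserver:
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s_audit
  ssl_enabled: false
```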