- For an overview of the application architecture, see the design canvas
- For the attacks covered, see the edge definitions
- To contribute a new attack to the project, follow the contribution guidelines
- Golang `>= 1.20`: https://go.dev/doc/install
- Docker `>= 19.03`: https://docs.docker.com/engine/install/
- Docker Compose `V2`: https://docs.docker.com/compose/compose-file/compose-versioning/
- Kind: https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager
- Kubectl: https://kubernetes.io/docs/tasks/tools/
Release binaries are available for Linux / Windows / Mac OS via the releases page. These provide access to core KubeHound functionality but lack support for the `make` commands detailed in subsequent sections. Once the release archive is downloaded and extracted, start the backend via:

```bash
./kubehound.sh backend-up
```
NOTE:

- If downloading the releases via a browser, you must run e.g.

```bash
xattr -d com.apple.quarantine KubeHound_Darwin_arm64.tar.gz
```

before running, to prevent MacOS from blocking execution.
Next, choose a target Kubernetes cluster, either:

- Select the targeted cluster via `kubectx` (needs to be installed separately)
- Use a specific kubeconfig file by exporting the env variable:

```bash
export KUBECONFIG=/your/path/to/.kube/config
```
Finally, run the compiled binary with the packaged configuration (`config.yaml`):

```bash
./kubehound.sh run
```
Clone this repository via git:

```bash
git clone https://github.com/DataDog/KubeHound.git
```
KubeHound ships with a sensible default configuration designed to get new users up and running quickly. The first step is to prepare the application:
```bash
cd KubeHound
make kubehound
```
This will do the following:
- Start the backend services via docker compose (wiping any existing data)
- Compile the kubehound binary from source
Next, choose a target Kubernetes cluster, either:

- Select the targeted cluster via `kubectx` (needs to be installed separately)
- Use a specific kubeconfig file by exporting the env variable:

```bash
export KUBECONFIG=/your/path/to/.kube/config
```
Finally run the compiled binary with default configuration:
```bash
bin/kubehound
```
To view the generated graph see the Using KubeHound Data section.
To view a sample graph demonstrating attacks in a deliberately vulnerable cluster, you can generate data by running the app against the provided kind cluster:

```bash
make sample-graph
```
To view the generated graph see the Using KubeHound Data section.
First create and populate a .env file with the required variables:
```bash
cp deployments/kubehound/.env.tpl deployments/kubehound/.env
```
Edit the variables (Datadog-related `DD_*` and `KUBEHOUND_ENV`):

- `KUBEHOUND_ENV`: `dev` or `release`
- `DD_API_KEY`: the API key you created from the https://app.datadoghq.com/ website

Note:

- `KUBEHOUND_ENV=dev` will build the images locally (and provide some local debugging containers, e.g. `mongo-express`)
- `KUBEHOUND_ENV=release` will use prebuilt images from ghcr.io
To replicate the automated command and run KubeHound step-by-step, first build the application:
```bash
make build
```

Next, spawn the backend infrastructure:

```bash
make backend-up
```
Next create a configuration file:
```yaml
collector:
  type: live-k8s-api-collector
telemetry:
  enabled: true
```
A tailored sample configuration file can be found here, and a full configuration reference containing all possible parameters here.
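If you instead want to run KubeHound against a previously collected Kubernetes API dump (as the system tests do with the file collector), the collector section would change accordingly. The `file-collector` type and `file.directory` key below are assumptions patterned on the live-collector example above, not confirmed schema; check the full configuration reference for the exact key names:

```yaml
collector:
  # Assumed file-based collector settings; verify the exact
  # schema against the configuration reference before use.
  type: file-collector
  file:
    directory: /path/to/cluster/dump
telemetry:
  enabled: true
```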
Finally run the KubeHound binary, passing in the desired configuration:
```bash
bin/kubehound -c <config path>
```
Remember, the targeted cluster must be set via `kubectx` or by setting the `KUBECONFIG` environment variable. Additional functionality for managing the application can be found via:

```bash
make help
```
Querying the KubeHound graph data requires the Gremlin query language, via an API call or a dedicated graph query UI. A number of graph query UIs are available, but we recommend gdotv. To access the KubeHound graph using gdotv:

- Download and install the application from https://gdotv.com/
- Create a connection to the local janusgraph instance by following the steps at https://docs.gdotv.com/connection-management/ and using `hostname=localhost`
- Navigate to the query editor and enter a sample query, e.g. `g.V().count()`. See detailed instructions here: https://docs.gdotv.com/query-editor/#run-your-query
We have documented a few sample queries to execute on the database in our documentation.
You can query the database data in your python script by using the following snippet:
```python
#!/usr/bin/env python
from gremlin_python.driver.client import Client

# KubeHound serves its graph over a Gremlin websocket endpoint,
# with "kh" as the traversal source.
KH_QUERY = "kh.containers().count()"

c = Client("ws://127.0.0.1:8182/gremlin", "kh")
results = c.submit(KH_QUERY).all().result()
print(results)
c.close()
```
You'll need to install `gremlinpython` as a dependency via:

```bash
pip install gremlinpython
```
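Building on the snippet above, a small helper can make ad-hoc queries easier to script. This is a sketch under the same assumptions (Gremlin endpoint at `ws://127.0.0.1:8182/gremlin`, traversal source `kh`); the sample query strings and the `run_query` helper name are illustrative, not part of the KubeHound API:

```python
# Sketch of a reusable query helper for the KubeHound Gremlin endpoint.
# The endpoint and "kh" traversal source follow the snippet above; the
# sample queries below are illustrative examples only.

# A few illustrative queries: plain Gremlin and the KubeHound "kh" DSL.
SAMPLE_QUERIES = {
    "vertex_count": "g.V().count()",
    "container_count": "kh.containers().count()",
}


def run_query(query: str, endpoint: str = "ws://127.0.0.1:8182/gremlin"):
    """Submit one Gremlin query and return the full result list."""
    # Deferred import so this module can be inspected without
    # gremlinpython installed.
    from gremlin_python.driver.client import Client

    client = Client(endpoint, "kh")
    try:
        return client.submit(query).all().result()
    finally:
        # Always release the websocket connection.
        client.close()


# Usage (requires gremlinpython and a running KubeHound backend):
#   results = run_query(SAMPLE_QUERIES["container_count"])
```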
Build the application via:
```bash
make build
```

All binaries will be output to the `bin` folder.
Build the release packages locally using goreleaser:
```bash
make local-release
```
The full suite of unit tests can be run locally via:
```bash
make test
```
The repository includes a suite of system tests that will do the following:
- create a local kubernetes cluster
- collect kubernetes API data from the cluster
- run KubeHound using the file collector to create a working graph database
- query the graph database to ensure all expected vertices and edges have been created correctly
The cluster setup and running instances can be found under `test/setup`.

If you need to manually access the system test environment with kubectl and other commands, you'll need to set (assuming you are at the root dir):

```bash
cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config
```
- `DD_API_KEY` (optional): set to the Datadog API key used to submit metrics and other observability data.
Set up the test kind cluster (you only need to do this once!) via:

```bash
make local-cluster-deploy
```

Then run the system tests via:

```bash
make system-test
```

To clean up the environment, destroy the cluster via:

```bash
make local-cluster-destroy
```
To list all the available commands, run:
```bash
make help
```
Note: if you are running on Linux but don't want to run `sudo` for the `kind` and `docker` commands, you can overwrite this behavior by editing the following variables in `test/setup/.config`:

- `DOCKER_CMD="docker"` for the docker command
- `KIND_CMD="kind"` for the kind command
System tests will be run in CI via the system-test github action.
KubeHound was created by the Adversary Simulation Engineering (ASE) team at Datadog:
With additional support from:
- Christophe Tafani-Dereeper @christophetd
We would also like to acknowledge the BloodHound team for pioneering the use of graph theory in offensive security and inspiring us to create this project.