Describe the bug
Using the default Docker configuration, I was unable to examine the KubeHound output using the example Jupyter notebook deployment. I had to change the Docker Compose files to get it to work. My changes would likely break on some Docker setups, so I was wondering if there's a way to make the example dev deployment more robust...
The first problem is that the notebook docker container uses "host": "host.docker.internal" as the kubegraph host to connect to. However, this hostname is not resolvable by default on Linux Docker; it only works after adding an extra_hosts entry to the notebook service in its docker-compose file (see docker/for-linux#264 (comment)):
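A minimal sketch of the kind of change I made (the notebook service name is illustrative and may differ in the actual compose file; host-gateway is the special value discussed in the linked Docker issue, supported on Docker Engine 20.10+):

```yaml
services:
  notebook:
    extra_hosts:
      # Make host.docker.internal resolve to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
```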
When I tried to run the notebook and connect to kubegraph using the host.docker.internal hostname (without these changes), it failed with a "Name or service unknown" type of error when trying to connect.
After adding this extra_hosts attribute it was still not working for me (this may have been a firewall issue; I wasn't able to determine the root cause): the kubegraph container did not expose its port to the notebook, since the port is (understandably) published only on localhost. My assumption was that this should be fine, because host.docker.internal points back at the same machine as localhost. However, the connection to that port was refused, I think because the docker-compose file explicitly publishes the port only on the local interface, 127.0.0.1.
I had to add the following port mappings to the docker-compose.dev.yaml file to make the port reachable through host.docker.internal from the notebook (172.17.0.1 is the IP of my host machine on the Docker bridge interface); see the sketch below.
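A sketch of what I added, assuming the kubegraph service name and the Gremlin server's default port 8182 (adjust both to match the actual docker-compose.dev.yaml):

```yaml
services:
  kubegraph:
    ports:
      # Original binding: only reachable from the host's loopback interface
      - "127.0.0.1:8182:8182"
      # Added binding on the Docker bridge gateway so other containers can
      # reach the port via host.docker.internal
      - "172.17.0.1:8182:8182"
```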
After these changes, I was finally able to run the example queries in the Jupyter notebook and started seeing the KubeHound output.
Using localhost or 127.0.0.1 in the notebook does not help either, for obvious reasons: inside the notebook container, localhost is not the host machine.
To Reproduce
Steps to reproduce the behavior:
Clone repo
make kubehound
Open the notebook and run the example queries (no need to actually run bin/kubehound, since that only populates the database)
See connection errors when running the first query
Expected behavior
The demo/example setup should work without extra manual steps.
Desktop:
OS: Ubuntu 22.04.4 LTS
Browser N/A
Version 22.04.4
Additional context
Docker version:
$ docker version
Client: Docker Engine - Community
 Version:           20.10.23
 API version:       1.41
 Go version:        go1.18.10
 Git commit:        7155243
 Built:             Thu Jan 19 17:45:08 2023
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          26.0.0
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       8b79278
  Built:            Wed Mar 20 15:17:48 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
Docker compose & go installations:
$ docker compose version
Docker Compose version v2.26.1
$ go version
go version go1.22.1 linux/amd64