
[Feature] add --hostnetwork flag to connect cluster to host network #53

Closed
wants to merge 2 commits into from

Conversation

mash-graz

@mash-graz mash-graz commented May 17, 2019

Add an option to use the host network for the server node, making LB/ingress connectivity accessible without additional port mappings.
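For illustration, the effect is roughly that of running a container with Docker's host networking: whatever the container binds is reachable on the host without any -p mappings. A minimal sketch of that mechanism, using a generic nginx container as a stand-in (not k3d's own code):

# hypothetical stand-in container to show what host networking buys you
docker run -d --name hostnet-demo --network host nginx
curl http://localhost:80/      # reachable without any -p port mapping
docker rm -f hostnet-demo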

@zeerorg
Collaborator

zeerorg commented May 19, 2019

I have mixed thoughts on adding OS-specific functionality. @andyz-dev and @iwilltry42 might be able to help better with it. Another thing that is possible on Linux is that you can call your ingress service directly using the container IP address.
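A minimal sketch of that direct-IP approach, assuming a Linux host and k3d's default server container name (an assumption -- check docker ps for the actual name):

SERVER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' k3d-k3s-default-server)
curl http://$SERVER_IP:80/    # hits the ingress/LB on the container IP directly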

@mash-graz
Author

I have mixed thoughts on adding OS-specific functionality.

The docker --network host option isn't an exotic Linux-specific feature; it's just not supported on some other platforms, where Docker can only be used by means of limited virtualization workarounds. Nevertheless, it's a rather efficient mode of operation for some tasks.

In this particular case it simply helps k3d behave more like the underlying k3s application, resp. it keeps control over the network setup on the Kubernetes side! That's IMHO a very important advantage of this approach.

Nevertheless, I wouldn't deny that we have to test this patch on the different platforms and perhaps improve it slightly to minimize OS-specific issues -- perhaps only enabling this option if a Docker host network is present on the machine, etc.
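A guard like that could boil down to asking Docker whether a host-driver network exists before accepting the flag; a rough sketch in shell terms (the real check in k3d would live in Go, and presence alone doesn't guarantee the network behaves as it does on Linux):

docker network ls --filter driver=host --quiet    # empty output = no host network, refuse --hostnetwork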

@iwilltry42
Member

Hi @mash-graz, thanks for the PR 👍
I can test this on Ubuntu and WSL soon. If you don't mind, I wouldn't include it in the next major release (v2.0.0) yet, but rather add it to the subsequent minor release (v2.1.0), if everything goes well?
@andyz-dev, can you test this on other systems?

@mash-graz
Author

mash-graz commented May 19, 2019

If you don't mind, I wouldn't include it in the next major release (v2.0.0) yet, but rather add it to the subsequent minor release (v2.1.0), if everything goes well?

In the end it's the maintainers' decision whether they like/accept code contributions, which I simply don't want to debate, but from a more objective/rational point of view I really can't see any benefit in such a postponement.

The changes in this PR do not alter the function of any already existing feature or the current operation of k3d. They only add a complementary mode of network exposure -- in fact a very simple one, which could otherwise be enabled in any similar wrapper script by very common docker command line flags.

For good reason, it's exactly this kind of solution that is meanwhile used as the default network mechanism in RootlessKit, because it combines simplicity with significant technical advantages, especially in case of nested sandboxing and safe operation without unnecessary access privileges.

If this PR should, against expectation, interfere with any already existing functionality in k3d, that should be really easy to fix and only caused by stupid mistakes -- i.e. a case that should be avoidable by simple code review. However, I wouldn't expect the added feature to work smoothly out of the box for all possible corner cases; that simply needs more practical testing and incremental improvement. In this respect I was more focused on figuring out a general solution that works at all. The actual implementation is more a first, very simple and easily readable draft than a technically perfect final realization. I wouldn't be surprised if someone reports practical issues concerning these kinds of shortcomings, or if maintainers simply want to see some relevant code improvements in advance, but deferring the inclusion just because of version numbering semantics IMHO isn't a sound argument in this particular case.

@andyz-dev
Contributor

It does not work on macOS. Maybe a minor issue, but I did not debug.

$ bin/k3d create --hostnetwork --workers 2
2019/05/19 13:54:19 Created cluster network with ID c6d42db7f4470d29b073669acee25f47fe2f76924fcb5feb5cdb40f9dde3be2f
2019/05/19 13:54:19 Creating cluster [k3s-default]
2019/05/19 13:54:19 Creating server using docker.io/rancher/k3s:v0.5.0...
2019/05/19 13:54:20 Booting 2 workers for cluster k3s-default
2019/05/19 13:54:20 Created worker with ID 18a6476e6832486de7f684a571c74374d7d81533e073e74d235ec218359ad0c7
2019/05/19 13:54:21 Created worker with ID 127b042041fef888043016eff059d73e8cda49eb11bdc46ab3817e31eb94545a
2019/05/19 13:54:21 SUCCESS: created cluster [k3s-default]
2019/05/19 13:54:21 You can now use the cluster with:

export KUBECONFIG="$(bin/k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
export KUBECONFIG="$(bin/k3d get-kubeconfig --name='k3s-default')"
azhou@Andys-MacBook-Pro:~/projs/review/k3d$ kubectl get nodes
The connection to the server localhost:6443 was refused - did you specify the right host or port?

@mash-graz
Author

mash-graz commented May 19, 2019

It does not work on macOS. Maybe a minor issue, but I did not debug.

Thanks for testing!

Could you please check whether a host network is available on your machine -- i.e. by using the command:

docker network ls

In Docker's official documentation you only find some notes that this networking mode isn't supported on all platforms, but it isn't clear whether the host network isn't listed at all on the affected platforms or whether it's still there but doesn't work in the expected manner, etc.

I therefore had to assume that trying to start the server container with a NetworkMode = "host" entry in the hostConfig would throw an error and stop any further initialization of the cluster...
but evidently that doesn't happen in your case. :(
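One way to see what the host network actually refers to on a given platform is to compare the interfaces inside a host-networked container with the machine's own; a small diagnostic sketch (assuming Docker Desktop on macOS runs containers inside a Linux VM, so "host" means that VM rather than the Mac itself):

docker run --rm --network host alpine ip addr show    # interfaces the container sees
ifconfig                                              # interfaces of the machine itself
# if the two lists differ, --network host attaches to the Docker VM, not to this machine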

@andyz-dev
Contributor

Here you are:

azhou@Andys-MacBook-Pro:~/projs/k3d$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d42746db136d        bridge              bridge              local
8bab15d53270        host                host                local
117356159e19        none                null                local

@mash-graz
Author

mash-graz commented May 20, 2019

NETWORK ID          NAME                DRIVER              SCOPE
d42746db136d        bridge              bridge              local
8bab15d53270        host                host                local
117356159e19        none                null                local

Thanks for testing!

I also tested your command sequence on my Linux machines, and it worked flawlessly...

It's rather disappointing that Docker doesn't always work equally well on all platforms. This doesn't only affect this particular issue (see also: docker/for-mac#2716) but also causes some other minor troubles.

I don't see any workaround for these OS-specific implementation defects, but that shouldn't stop the legitimate and very useful application of this feature on Docker's main platform, which I would still associate with the Linux operating system.

As already proposed, it could make some sense to simply disable this option on unsupported platforms to minimize irritation for end users -- but on the other hand it doesn't do much harm if they have to learn to recognize and work around all the current shortcomings of the Docker implementation on their platform. ;)

@iwilltry42
Member

iwilltry42 commented May 20, 2019

Hey @mash-graz, I am in no way trying to postpone your contribution just because I feel like it. I'd actually be super happy if we had a new way for easier networking, even if it's only on a single platform.
The reason for my comment was simply that the original plan for v2.0.0 was to create a stable interface with proper documentation first instead of adding new features.
But we already discussed this and changed plans to first do another few minor releases, where we can add more small features.
I'm testing this today 👍

@iwilltry42
Member

Hey there, sorry for the super late reply.
I just tested this on my machine (Linux) and it fails with the following logs

time="2019-05-22T09:06:57.309483084Z" level=info msg="Starting k3s v0.5.0 (8c0116dd)"
time="2019-05-22T09:06:58.983060981Z" level=info msg="Running kube-apiserver --allow-privileged=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --advertise-port=6445 --api-audiences=unknown --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --watch-cache=false --insecure-port=0 --secure-port=6444 --bind-address=127.0.0.1 --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --requestheader-allowed-names=kubernetes-proxy --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --service-account-issuer=k3s --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --advertise-address=127.0.0.1 --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node-1.crt --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-username-headers=X-Remote-User --service-cluster-ip-range=10.43.0.0/16"
E0522 09:06:59.441854       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.442078       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.442131       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.442159       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.442183       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.442206       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0522 09:06:59.480242       1 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0522 09:06:59.486351       1 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0522 09:06:59.497877       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.497902       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.497952       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.498014       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.498029       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0522 09:06:59.498041       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
time="2019-05-22T09:06:59.503233763Z" level=info msg="Running kube-scheduler --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --leader-elect=false"
time="2019-05-22T09:06:59.503454773Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --root-ca-file=/var/lib/rancher/k3s/server/tls/token-ca.crt --port=10252 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.42.0.0/16 --leader-elect=false"
E0522 09:06:59.504896       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
W0522 09:06:59.506522       1 authorization.go:47] Authorization is disabled
W0522 09:06:59.506564       1 authentication.go:55] Authentication is disabled
time="2019-05-22T09:06:59.550520899Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
time="2019-05-22T09:06:59.553144917Z" level=info msg="Creating CRD addons.k3s.cattle.io"
time="2019-05-22T09:06:59.554051248Z" level=info msg="Creating CRD helmcharts.k3s.cattle.io"
time="2019-05-22T09:06:59.555460528Z" level=info msg="Waiting for CRD listenerconfigs.k3s.cattle.io to become available"
time="2019-05-22T09:07:00.059284723Z" level=info msg="Done waiting for CRD listenerconfigs.k3s.cattle.io to become available"
time="2019-05-22T09:07:00.059330738Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2019-05-22T09:07:00.561539798Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2019-05-22T09:07:00.561578193Z" level=info msg="Waiting for CRD helmcharts.k3s.cattle.io to become available"
time="2019-05-22T09:07:01.064924226Z" level=info msg="Done waiting for CRD helmcharts.k3s.cattle.io to become available"
time="2019-05-22T09:07:01.069537582Z" level=info msg="Listening on :6556"
time="2019-05-22T09:07:01.799061051Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
time="2019-05-22T09:07:01.799106203Z" level=info msg="To join node to cluster: k3s agent -s https://10.32.122.200:6556 -t ${NODE_TOKEN}"
time="2019-05-22T09:07:01.802583324Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
time="2019-05-22T09:07:01.803021526Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2019-05-22T09:07:01.803315416Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2019-05-22T09:07:02.423338295Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2019-05-22T09:07:02.423384663Z" level=info msg="Run: k3s kubectl"
time="2019-05-22T09:07:02.423400416Z" level=info msg="k3s is up and running"
time="2019-05-22T09:07:02.863006049Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2019-05-22T09:07:02.864025557Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2019-05-22T09:07:02.864601167Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
W0522 09:07:02.954723       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
W0522 09:07:03.704964       1 controllermanager.go:445] Skipping "root-ca-cert-publisher"
time="2019-05-22T09:07:03.803977262Z" level=info msg="Handling backend connection request [k3d-hostnetwork-worker-0]"
time="2019-05-22T09:07:03.867898836Z" level=warning msg="failed to start br_netfilter module"
time="2019-05-22T09:07:03.868501265Z" level=info msg="Connecting to wss://localhost:6556/v1-k3s/connect"
time="2019-05-22T09:07:03.868527452Z" level=info msg="Connecting to proxy" url="wss://localhost:6556/v1-k3s/connect"
time="2019-05-22T09:07:03.902031245Z" level=info msg="Handling backend connection request [<hostname>]"
time="2019-05-22T09:07:03.903075420Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2019-05-22T09:07:03.903170076Z" level=info msg="Running kubelet --read-only-port=0 --anonymous-auth=false --allow-privileged=true --kubeconfig=/var/lib/rancher/k3s/agent/kubeconfig.yaml --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --cgroup-driver=cgroupfs --cluster-domain=cluster.local --cluster-dns=10.43.0.10 --hostname-override=<hostname> --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.pem --tls-cert-file=/var/lib/rancher/k3s/agent/token-node.crt --authentication-token-webhook=true --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir=/bin --address=0.0.0.0 --tls-private-key-file=/var/lib/rancher/k3s/agent/token-node.key --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --healthz-bind-address=127.0.0.1 --fail-swap-on=false --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --root-dir=/var/lib/rancher/k3s/agent/kubelet --container-runtime=remote --cpu-cfs-quota=false"
Flag --allow-privileged has been deprecated, will be removed in a future version
W0522 09:07:03.906695       1 server.go:214] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0522 09:07:03.908714       1 proxier.go:480] Failed to read file /lib/modules/5.0.0-15-generic/modules.builtin with error open /lib/modules/5.0.0-15-generic/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0522 09:07:03.909051       1 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0522 09:07:03.910394       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
W0522 09:07:03.911415       1 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0522 09:07:03.911716       1 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0522 09:07:03.912005       1 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0522 09:07:03.912261       1 proxier.go:493] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
time="2019-05-22T09:07:03.914063113Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: can't change directory to '5.0.0-15-generic': No such file or directory`, error: exit status 1"
W0522 09:07:03.921435       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0522 09:07:03.921885       1 server.go:677] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
F0522 09:07:03.923644       1 server.go:149] listen tcp 0.0.0.0:10250: bind: address already in use
goroutine 8180 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog.stacks(0xc0001d9b00, 0xc00457f600, 0x64, 0x1e2)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:828 +0xb1
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).output(0x5f95ca0, 0xc000000003, 0xc000eeeee0, 0x5cbe048, 0x9, 0x95, 0x0)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:779 +0x2d9
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).printDepth(0x5f95ca0, 0xc000000003, 0x1, 0xc0048abe90, 0x1, 0x1)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:669 +0x12b
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).print(...)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:660
github.com/rancher/k3s/vendor/k8s.io/klog.Fatal(...)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:1189
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/server.ListenAndServeKubeletServer(0x3c59be0, 0xc004792000, 0x3bbe9e0, 0xc002294cc0, 0xc0017ff360, 0x10, 0x10, 0x280a, 0xc002ff9c50, 0x3b98ee0, ...)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/server/server.go:149 +0x45c
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).ListenAndServe(0xc004792000, 0xc0017ff360, 0x10, 0x10, 0x280a, 0xc002ff9c50, 0x3b98ee0, 0xc002d52e10, 0xc004600001)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2175 +0xec
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/cmd/kubelet/app.startKubelet
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go:1000 +0x1e2
time="2019-05-22T09:07:04.055102438Z" level=info msg="waiting for node <hostname>: nodes \"<hostname>\" not found"
time="2019-05-22T09:07:04.056381307Z" level=info msg="Updated coredns node hosts entry [172.19.0.2 k3d-hostnetwork-worker-0]"

Since I'm at a conference, I unfortunately don't have the time to debug this.

@mash-graz
Author

I just tested this on my machine (Linux) and it fails with the following logs

Could you please report the command line options you used, so I can reproduce this behavior?

@iwilltry42
Member

I ran bin/k3d create -n hostnetwork -p 6556 --hostnetwork --workers 2
(actually also once without the -p)

@mash-graz
Author

Thanks! -- I'll try to reproduce/debug it...

@iwilltry42
Member

BTW:

  • Docker: v18.09.6
  • Ubuntu: 19.04

@mash-graz
Author

mash-graz commented May 22, 2019

O.k. -- the issue is unfortunately easy to reproduce! :(

It always happens when you try to start more than one cluster, resp. an additional server, on your machine. In this case the default k3s network ports will clash:

E0522 09:07:03.921885       1 server.go:677] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
F0522 09:07:03.923644       1 server.go:149] listen tcp 0.0.0.0:10250: bind: address already in use

It's nothing unexpected -- just the same behavior as if you tried to start two dockerized web servers on your host and both wanted to use port 80. It's more or less obvious that this can't work when utilizing --net host.

This kind of conflict was the reason why I limited the network exposure via --net host to the server, because otherwise the port requirements of each worker would trigger similar clashes.

But unfortunately it will also happen with multiple servers resp. clusters on the same machine if we don't reconfigure k3s for each instance... (which doesn't look very desirable to me)
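To see up front whether the relevant ports are already taken on a host before creating a host-networked cluster, a quick check along these lines helps (10248 and 10250 are the kubelet ports clashing in the logs above; 6443 is the usual API port):

sudo ss -tlnp | grep -E ':(6443|10248|10250)[^0-9]' || echo "ports look free"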

@iwilltry42
Member

Actually, I didn't have any other container or cluster running at the same time.

@iwilltry42
Member

Just saw that it conflicted with vanilla k3s and microk8s running on my host.
I stopped both and it works now 👍

@mash-graz
Author

Just saw that it conflicted with vanilla k3s and microk8s running on my host. I stopped both and it works now

Hmm... nevertheless it demonstrates a serious issue!

It's really hard to decide whether these drawbacks/limitations are still compatible with the main goal resp. general design of k3d.

@iwilltry42
Member

I agree... it could lead to lots of "unsolvable" issues, since re-configuring k3s per node is not really an option.
Though it's a neat solution for Linux systems that are free of tools which might block the required ports...

@iwilltry42
Member

@nunix, since I now know that you like experimenting with k3d on WSL, you might want to test this PR and see if there's some way to get this approach to work in WSL 😃

@iwilltry42 iwilltry42 changed the title option to use the host nework for the the server node to make LB/ingress connectivty accessible without additional port mappings. [Feature] add --hostnetwork flag to connect cluster to host network May 27, 2019
@iwilltry42 iwilltry42 linked an issue Apr 11, 2020 that may be closed by this pull request
@iwilltry42
Member

During my work on #220, I tried using the host network and the k3s server does not even start, even with no conflicting services on the host system.

time="2020-04-14T15:35:04.356297107Z" level=fatal msg="apiserver exited: Unable to find suitable network address.error='no default routes found in \"/proc/net/route\" or \"/proc/net/ipv6_route\"'. Try to set the AdvertiseAddress directly or provide a valid BindAddress to fix this."

Is this issue still valid after all? 🤔
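In case it helps with debugging: the error message itself suggests pinning the address. If a --server-arg style passthrough to k3s is available in the build being tested (an assumption on my part, and untested for this missing-default-route case), something like this might be worth a try:

# hypothetical workaround, assuming --server-arg passes flags through to k3s server
bin/k3d create --hostnetwork --server-arg "--bind-address=192.168.1.10"    # substitute a real routable host IP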

@mash-graz
Author

mash-graz commented Apr 14, 2020

During my work on #220 , I tried using the host network and the k3s server does not even start, even with no conflicting services on the host system.

I stopped using k3d after these rather disappointing network design decisions and went back to simple docker-compose based setups for my purposes (e.g.: https://gitlab.com/mur-at-public/kube).

@iwilltry42
Member

I stopped using k3d after these rather disappointing network design decisions and went back to simple docker-compose based setups for my purposes (e.g.: https://gitlab.com/mur-at-public/kube).

I'd love to get some more feedback on this.
What are the disappointing network design decisions that you mentioned, apart from not supporting hostnet mode?

Regarding this PR:
As far as I can see, there are few scenarios in which one would want to use the host net.
Fair enough, there are some scenarios where it's a nice-to-have feature and I'd like to support it, but there are quite a lot of cases one has to consider when using host net, e.g.

  • interference with existing servers on the host
  • not running more than one worker
  • port collisions
  • only one single platform where this is really working

I'd rather have a --network NET flag, where host can be one possible value and those cases are handled, instead of a flag that only works for some Linux users 🤔
Anyway, this flag will be available in k3d v3 and maybe you want to try it there.
Nothing to say against docker-compose though if it's OK for your use cases 👍
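For reference, the --network variant mentioned above would presumably look something like this with the v3 CLI (assumed syntax -- check k3d cluster create --help in v3):

k3d cluster create mycluster --network host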

@iwilltry42 iwilltry42 changed the base branch from master to master-v1 May 15, 2020 08:00
@iwilltry42
Member

Closing this due to inactivity and in favor of k3d v3 ✔️

@iwilltry42 iwilltry42 closed this May 28, 2020