
Use container IP address in server field in kubeconfig for kind cluster #111

Closed
font opened this issue Nov 14, 2018 · 12 comments · Fixed by #478

font commented Nov 14, 2018

When working with multiple clusters, such as with https://github.com/kubernetes-sigs/federation-v2, we provide cluster info to the federation controller running in one of the clusters so that it can talk to all the other clusters. That cluster info comes from the kubeconfig provided by the user. The current kind create cluster command creates a kubeconfig whose server field uses localhost, as shown below:

$ KUBECONFIG=$(kind get kubeconfig-path) kubectl config view | grep server
    server: https://localhost:38785

This is problematic when one kind cluster needs to access the kube API server of another kind cluster, because localhost inside each kind cluster refers to that cluster itself. Since the kind clusters share a common docker bridge network, we can instead use each container's IP address. So what is needed is for the container IP address to be used in the server field of the kubeconfig for the kind cluster, as shown below:

$ KUBECONFIG=$(kind get kubeconfig-path) kubectl config view | grep server
    server: https://172.17.0.2:38785

The host would still be able to talk to the kube API servers using their container IP addresses. Is this something that is desirable? Is there a reason to use localhost instead?
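
For reference, on Linux a kind node's container IP can be looked up roughly like this (the container name and the docker network layout are assumptions; kind's node naming has varied across versions):

$ docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' kind-control-plane
    172.17.0.2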

munnerz (Member) commented Nov 14, 2018

On OSX at least, it's not possible to access the docker container network 😞

If we want to make this configurable, there are three options I can think of:

  • Add a boolean/toggle to the kind Config type
  • Add a flag to kind create cluster to toggle the behaviour
  • Add a new kind get kubeconfig command to grab a kubeconfig for an existing cluster, and add a flag to that

Not sure what's most desirable here? 😄
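
As a purely illustrative sketch of the first option (the useContainerIP field and the config header shown here are hypothetical, not actual kind Config fields):

$ cat <<EOF > kind-config.yaml
# hypothetical toggle; not a real kind Config option
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
useContainerIP: true
EOF
$ kind create cluster --config kind-config.yaml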

BenTheElder (Member) commented

Each cluster has a different port for the API server, so it should still be possible for them to talk to each other (?). The reason for localhost is that Windows / Mac do not support anything else as far as I know, because the container network is actually inside a lightweight VM.

BenTheElder (Member) commented

It's also possible that using a single network bridge will be problematic in the future, once we get multiple nodes. Using localhost and a random port makes things pretty portable.

Can federation support keying off of the address + port instead of just the address?

It would be quite easy to create a kubeconfig with the container IP address on Linux, but so far I've preferred keeping the environment as consistent as possible. Ideally, users should be able to replicate CI very closely on their local machines, which is a major goal for kind.

font (Author) commented Nov 15, 2018

> Each cluster has a different port for the API server, so it should still be possible for them to talk to each other (?). The reason for localhost is that Windows / Mac do not support anything else as far as I know, because the container network is actually inside a lightweight VM.

On Linux, I don't think it's possible while using localhost, because 127.0.0.1 is not routable from one container to another, i.e. across the network bridge.

> Can federation support keying off of the address + port instead of just the address?

We key off of the entire server field, so we do include the port. It's just that using localhost:<port_for_other_kind_cluster_api_server> returns a connection refused error, because the local container (which is what localhost refers to) is not listening on that port; it belongs to the other cluster's container network. So a pod running inside the kind-1 cluster needs a routable IP address for the API server running in the kind-2 cluster.
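
To illustrate the symptom on Linux (the container names, IP, and port are examples; curl being available in the node image is an assumption, and the in-container port is shown equal to the published one, which matches kind at the time of this discussion):

$ # localhost inside kind-1's node refers to kind-1 itself, so kind-2's published port is closed here:
$ docker exec kind-1-control-plane curl -k https://localhost:38785/version    # connection refused
$ # kind-2's container IP on the shared docker bridge is routable from kind-1's node:
$ docker exec kind-1-control-plane curl -k https://172.17.0.3:38785/version   # reaches kind-2's API server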

BenTheElder (Member) commented

Ah, right. Thanks. Will think about this some more. While you can trivially get the container IP and rewrite it currently, it would be ideal to make this work portably.
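
A minimal sketch of that manual rewrite on Linux (the container name, the cluster entry name, and the port are assumptions; check kubectl config view for the real values, and note that the API server certificate must include the container IP in its SANs for TLS verification to pass):

$ NODE_IP=$(docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' kind-control-plane)
$ KUBECONFIG="$(kind get kubeconfig-path)" kubectl config set-cluster kind \
    --server="https://${NODE_IP}:38785"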

BenTheElder added the kind/feature and priority/important-soon labels Nov 22, 2018
fabriziopandini (Member) commented

@BenTheElder @font
I'm currently working on multi-node support and I want to be sure my effort is compatible with the federation test requirements.

What I'm proposing for multi-node is to:

  • always use IP:6443 as the API server address for "infra-docker" (in-docker) communication. This allows multiple clusters and gets the random port out of the way for node join / API server load balancing
  • use localhost + a random port only in the kubeconfig that is copied onto the host machine

What is described above is a small variation of the current state, and if I read this thread right, it neither fixes nor prevents fixing the above issue for the federation tests. Is that right, or do you see problems in what I'm proposing?

If I can give my two cents on how to address the federation requirement, I think a possible solution is to add an option for retrieving the raw kubeconfig (kind get kubeconfig --raw, which would give you exactly the kubeconfig that exists in docker, i.e. with IP:6443). That kubeconfig could then be reused by the federation controller, assuming it also runs in docker. Wdyt?
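
To make the two forms concrete (the container name, IP, and port values are examples taken from earlier in this thread):

$ # in-docker ("raw") form, as stored on the node:
$ docker exec kind-control-plane grep 'server:' /etc/kubernetes/admin.conf
    server: https://172.17.0.2:6443
$ # host-side form written by kind create cluster:
$ grep 'server:' "$(kind get kubeconfig-path)"
    server: https://localhost:38785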

neolit123 (Member) commented

I think we should make the random port a UX knob.

BenTheElder (Member) commented

--raw makes sense, BUT we need the random port outside the container to handle Docker for Mac / Windows, and I'd rather minimize code paths because CI should match local usage as closely as possible for reproducibility.

BenTheElder (Member) commented

For the random port see #178. We should allow setting the address as well; we'll set a random port if you don't specify one.

BenTheElder self-assigned this Feb 11, 2019
BenTheElder added this to the 1.0 milestone Feb 11, 2019
BenTheElder (Member) commented

Neglected to update here: the random port is a config knob now, and it is always 6443 inside the docker network. We still need to add a command to export the kubeconfig, with a --raw option or similar.
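
For reference, the knob looks roughly like this in a kind cluster config (the apiVersion and exact field names have changed between kind releases, so treat this as a sketch):

$ cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"  # address the API server is published on for the host
  apiServerPort: 6443            # fixed host port instead of a random one
EOF
$ kind create cluster --config kind-config.yaml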

BenTheElder modified the milestones: 1.0, 0.4 May 3, 2019
aojea (Contributor) commented May 3, 2019

I think that as a workaround this will work:

$ docker exec -it $CLUSTER_NAME-control-plane cat /etc/kubernetes/admin.conf
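
The server field in that file points at the cluster's in-docker address, so the captured copy is usable from other containers on the same docker network (not from the host on macOS / Windows). For example, assuming a second cluster's node named kind-2-control-plane (kind node images ship kubectl):

$ docker exec "${CLUSTER_NAME}-control-plane" cat /etc/kubernetes/admin.conf > internal-kubeconfig
$ docker cp internal-kubeconfig kind-2-control-plane:/tmp/kubeconfig
$ docker exec kind-2-control-plane kubectl --kubeconfig /tmp/kubeconfig get nodes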

BenTheElder (Member) commented

After #478:

$ kind get kubeconfig --internal > internal-kubeconfig
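
The exported file's server field points at the cluster's in-network address on port 6443 rather than localhost, so it is meant for workloads running on the docker network (for example, a federation controller deployed in one of the kind clusters) rather than for the host on macOS / Windows:

$ grep 'server:' internal-kubeconfig    # in-network address on 6443, not localhost:<random port>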
