
kind get kubeconfig gives unclear error when --name is missing #2205

Closed
gprossliner opened this issue Apr 19, 2021 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@gprossliner
Contributor

What happened:

I'm not able to get the (internal or external) kubeconfig for explicitly named clusters.
I'm not sure whether this is a bug or I'm doing something wrong, but either way I didn't expect it to fail this way.

Little Background:

  • I use kind to test KubeFed, so I need the individual kind Clusters to talk to each other.
  • This is not possible over https://127.0.0.1:randomport, but I came across the kind get kubeconfig --internal command,
    which yields https://containername:6443 endpoints for the kubeconfig.
  • I got this error:
$ kind get kubeconfig 
ERROR: could not locate any control plane nodes
  • After playing with kind for some time, I figured out that it works for clusters created without --name:
# Start with no clusters
$ kind get clusters
No kind clusters found.

# Create a cluster, and get the kubeconfig: OK
$ kind create cluster
$ kind get kubeconfig --internal
apiVersion: v1 ...

# Delete the cluster
$ kind delete cluster

# Create a named cluster, and get the kubeconfig
$ kind create cluster --name c
$ kind get kubeconfig 
ERROR: could not locate any control plane nodes

What you expected to happen:

I expected the kubeconfig to be retrieved regardless of whether I use an explicit name or kind's default name.

How to reproduce it (as minimally and precisely as possible):

Use the script above.

Anything else we need to know?:

Environment:

  • kind version: (use kind version): kind v0.10.0 go1.15.7 linux/amd64
  • Kubernetes version: (use kubectl version): Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
```
Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 11
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 
 runc version: 
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.8.0-48-generic
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.51GiB
 Name: WDAT200263
 ID: 3X5Q:RLUA:GLQL:MDNH:Y7FU:2UCQ:PDKG:ZPSK:YR6Y:IYO2:76GL:7AZO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: worlddirect
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
- OS (e.g. from `/etc/os-release`):
```
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
@gprossliner gprossliner added the kind/bug Categorizes issue or PR as related to a bug. label Apr 19, 2021
@BenTheElder
Member

You need kind get kubeconfig --name=foo; in general, to interact with a cluster with a non-default name you must supply it.

We should improve the error message to note which cluster it was looking for.
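A minimal sketch of the fix, reusing the cluster name `c` from the reproduction script above:

```shell
# The named cluster must be selected explicitly:
$ kind get kubeconfig --name c

# Without --name, kind looks for the default cluster, i.e. the
# equivalent of:
$ kind get kubeconfig --name kind
```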

@BenTheElder BenTheElder changed the title kind get kubeconfig fails for custom named clusters kind get kubeconfig gives unclear error when --name is missing Apr 19, 2021
@BenTheElder BenTheElder self-assigned this Apr 19, 2021
@gprossliner
Contributor Author

Ok, thanks! This obviously was my mistake.

Because I need this for kubefed, it would be nice if I could get one kubeconfig for multiple kind clusters, as this is needed for kubefedctl. Can I specify the --name flag multiple times?

I would also like to ask why you decided to implement a flag for the name instead of a positional argument, i.e. kind create cluster test1 instead of kind create cluster --name=test1? It looks a bit strange compared to other Kubernetes tools like kubectl. Was it to support the default value?

Thanks a lot for your impressive work btw!

@BenTheElder
Member

Because I need this for kubefed, it would be nice if I could get one kubeconfig for multiple kind clusters, as this is needed for kubefedctl. Can I specify the --name flag multiple times?

You have to invoke the command multiple times for this currently. We could perhaps add kind get kubeconfigs / kind export kubeconfigs, with usage like kind get kubeconfig [--all] [clustername1] [clustername2] ...
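Until such a plural command exists, the per-cluster invocation can be scripted; a sketch using a shell loop (not an official kind feature; the output file names are illustrative):

```shell
# Fetch the kubeconfig of every existing kind cluster.
# `kind get clusters` prints one cluster name per line.
$ for c in $(kind get clusters); do
>   kind get kubeconfig --name "$c" > "kubeconfig-$c.yaml"
> done
```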

I would also like to ask why you decided to implement a flag for the name instead of a positional argument, i.e. kind create cluster test1 instead of kind create cluster --name=test1? It looks a bit strange compared to other Kubernetes tools like kubectl. Was it to support the default value?

  • we wanted it to be optional for kind create cluster, so we started with a flag; we honestly did not expect so many users to create multiple clusters at once, given the overhead
  • minikube used --profile for the same concept
  • we thought of it most like --namespace in kubectl

In retrospect, I agree, but I'm not sure it's worth the breaking change for all the existing users.
It would be a more painful change for e.g. kind load docker-image I think.

If I didn't fear how big of a breaking change it would be, I would also like if you could set your current context in kubeconfig and have that become the default cluster, but I think that's perhaps too magical and there are so many scripts out there using --name. At least we have --name consistently across single cluster commands.

Thanks a lot for your impressive work btw!

Thank you 😅

@gprossliner
Contributor Author

You have to invoke the command multiple times for this currently. We could add kind get kubeconfigs / kind export kubeconfigs perhaps? with usage like kind get kubeconfig [--all] [clustername1] [clustername2] ...

Invoking this multiple times would require merging the kubeconfigs later on for the kubefedctl scenario, because you need to specify --cluster-context and --host-cluster-context, and it's not possible to specify individual files: https://github.com/kubernetes-sigs/kubefed/blob/master/docs/cluster-registration.md.

I think kind get kubeconfigs or kind get kubeconfig --all would be fine, even if no individual clusters can be specified. I see no problem having a cluster in my kubeconfig which I don't use. Personally I think kind get kubeconfig --all is better than kind get kubeconfigs, because 1. the latter reads like there are multiple files, and 2. it's just a variation of the command, and not a new one.
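For the kubefedctl scenario, separately fetched kubeconfig files can be merged with stock kubectl; a sketch, where the cluster and file names are illustrative:

```shell
$ kind get kubeconfig --name c1 > c1.yaml
$ kind get kubeconfig --name c2 > c2.yaml

# kubectl merges everything on the KUBECONFIG path into one view:
$ KUBECONFIG=c1.yaml:c2.yaml kubectl config view --flatten > merged.yaml
```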

In retrospect, I agree, but I'm not sure it's worth the breaking change for all the existing users.
It would be a more painful change for e.g. kind load docker-image I think.

If I didn't fear how big of a breaking change it would be, I would also like if you could set your current context in kubeconfig and have that become the default cluster, but I think that's perhaps too magical and there are so many scripts out there using --name. At least we have --name consistently across single cluster commands.

I'm very aware that this would break a lot of users, and may not be worth the change.

For kind load docker-image (which I would see as a kind of sub-resource), I don't know a kubectl command with comparable semantics (adding a sub-resource). For the /scale subresource there is a distinct command, "scale".
Applying those semantics would give something like kind load-image clustername --image image.

@BenTheElder
Member

Invoking this multiple times would need to merge them

If you use export instead of get kind will handle the merging for you.

Personally I think kind get kubeconfig --all is better the kind get kubeconfigs, because 1. the latter reads like there are multiple files, and 2. it's just a variation of the command, and not a new one.

It is getting multiple files, and --all versus --name would be our first and only disjoint / banned flag combination. We've added a similar plurality before with kind delete clusters.

kind load-image clustername --image image

Having a flag that you can never omit is pretty terrible CLI UX. We always need an image.

I don't think it's worth breaking everyone at this point. There's a happy path for a single cluster and a marginally more annoying path for multiple.

In addition to the consistent flag you can experimentally use the environment variable instead.
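The environment variable in question is presumably `KIND_CLUSTER_NAME` (experimental; the name is an assumption here, as the comment above does not spell it out); a sketch:

```shell
# Experimental: selects the cluster without passing --name each time
$ export KIND_CLUSTER_NAME=c
$ kind get kubeconfig
```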

@gprossliner
Contributor Author

If you use export instead of get kind will handle the merging for you.

Merging works fine. Thanks. But I can't specify --internal like in kind get kubeconfig:

ERROR: unknown flag: --internal

kind load-image clustername --image image

Having a flag you can never not specify is pretty terrible CLI UX. We always need an image.

I agree. I was suggesting it because kubectl also has a number of required flags, like --replicas for kubectl scale, or --image for kubectl run.

I don't think it's worth breaking everyone at this point.

I totally understand this decision.

@BenTheElder
Member

We should still update that error message to say something like could not find any control-plane nodes for cluster %q, clusterName.

@BenTheElder BenTheElder removed their assignment Jun 24, 2021
fstrudel added a commit to fstrudel/kind that referenced this issue Jul 13, 2021
- When you ask for a kubeconfig file for a named
cluster (i.e. not the default "kind"), instead of
getting the (cryptic) message
"could not locate any control plane nodes",
also return the name of the cluster and a hint
as to what the user should supply (the --name
option).
- Note: Did not find any existing tests for this file, nor which mock
libraries we should use to mock the dependencies?

Testing Done: Create a named cluster and tested the `get kubeconfig` option:
```
$ make build unit verify
$ ./bin/kind create cluster --name mytest

$ ./bin/kind get kubeconfig
ERROR: could not locate any control plane nodes for cluster named 'kind'. Use the --name option to select a different cluster

$ ./bin/kind get kubeconfig --name mytest
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS[...]
```

Bug Number: kubernetes-sigs#2205
@BenTheElder
Member

#2364 is merged

coutinhop pushed a commit to coutinhop/kind that referenced this issue Aug 18, 2022