kind get kubeconfig gives unclear error when --name is missing #2205
Comments
You need to pass `--name` with the cluster's name; we should improve the error message to note which cluster it was looking for.
(Title changed from "kind get kubeconfig fails for custom named clusters" to "kind get kubeconfig gives unclear error when --name is missing".)
Ok thanks! This obviously was my mistake. Because I need this for kubefed, it would be nice if I could get one kubeconfig for multiple kind clusters, as this is needed for kubefedctl. Can I specify the --name flag multiple times? I would also like to ask why you decided to implement a flag for the name instead of a positional argument. Thanks a lot for your impressive work btw!
You have to invoke the command multiple times for this currently. We could add an `--all` flag.
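If you only need the files on disk, a small shell loop (the cluster names `cluster1`/`cluster2` here are hypothetical) can fetch one kubeconfig per cluster:

```
for c in cluster1 cluster2; do
  kind get kubeconfig --name "$c" > "kubeconfig-$c"
done
```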
In retrospect, I agree, but I'm not sure it's worth the breaking change for all the existing users. If I didn't fear how big of a breaking change it would be, I would also like it if you could set your current context in the kubeconfig and have that become the default cluster, but I think that's perhaps too magical, and there are so many scripts out there using `--name`.
Thank you 😅
Invoking this multiple times means you would need to merge the files later for the kubefedctl scenario, because you need to specify a single kubeconfig, I think.
I'm very aware that this would break a lot of users and may not be worth the change. For the …
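If the separate files do need to be combined by hand for kubefedctl, one standard approach (file names hypothetical, continuing the loop above) is to let kubectl merge them:

```
$ KUBECONFIG=kubeconfig-cluster1:kubeconfig-cluster2 kubectl config view --flatten > merged-kubeconfig
```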
If you use `export` instead of `get`, kind will handle the merging for you.
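For example (cluster names hypothetical), `kind export kubeconfig` writes each cluster's credentials into your default kubeconfig (or the file `$KUBECONFIG` points at), so a single file ends up with a context per cluster:

```
$ kind export kubeconfig --name cluster1
$ kind export kubeconfig --name cluster2
$ kubectl config get-contexts
```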
It is getting multiple files, and --all versus --name would be our first and only disjoint / banned flag combination. We've added a similar plurality before with `kind delete clusters`.
Having a flag you can never not specify is pretty terrible CLI UX. We always need a name. I don't think it's worth breaking everyone at this point. There's a happy path for a single cluster and a marginally more annoying path for multiple. In addition to the consistent flag, you can experimentally use the environment variable instead.
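The environment variable is not named in the comment; assuming it is `KIND_CLUSTER_NAME` (kind's experimental cluster-name override), usage would look like:

```
$ KIND_CLUSTER_NAME=mytest kind get kubeconfig
```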
I agree. I was suggesting it because kubectl also has a big number of required flags.
We should still update that error message to say something like `"could not find any control-plane nodes for cluster %q", clusterName`.
- When you ask for a kubeconfig file for a named cluster (i.e. not the default "kind"), instead of getting the cryptic message "could not locate any control plane nodes", also return the name of the cluster and a hint about what the user should supply (the `--name` option).
- Note: Did not find any existing tests for this file, nor which mock libraries we should use to mock the dependencies.

Testing Done: Created a named cluster and tested the `get kubeconfig` option:

```
$ make build unit verify
$ ./bin/kind create cluster --name mytest
$ ./bin/kind get kubeconfig
ERROR: could not locate any control plane nodes for cluster named 'kind'. Use the --name option to select a different cluster
$ ./bin/kind get kubeconfig --name mytest
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS[...]
```

Bug Number: kubernetes-sigs#2205
#2364 is merged
What happened:
I'm not able to get the (internal or external) kubeconfig for explicitly named clusters.
I'm not sure if this is a bug, or if I'm doing something wrong, but at least I don't expect it to fail this way.
A little background: I use the `kind get kubeconfig --internal` command to get `https://containername:6443` endpoints for the kubeconfig.
What you expected to happen:
I expected the kubeconfig to be retrieved regardless of whether I use an explicit name or kind uses its default name.
How to reproduce it (as minimally and precisely as possible):
Use the script above.
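The script itself is not included in this thread; a minimal reproduction, assuming a cluster created with a custom name, would be:

```
$ kind create cluster --name mycluster
$ kind get kubeconfig
ERROR: could not locate any control plane nodes
```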
Anything else we need to know?:
Environment:
- kind version (use `kind version`): kind v0.10.0 go1.15.7 linux/amd64
- kubectl version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
- docker info (use `docker info`):