RFC: CLI UX Improvements #850
Comments
Most of these changes are actually pretty simple to implement; pending feedback, I hope we can land these for |
I am very much +1 on the KIND_NAME idea; I think maybe KIND_CLUSTER_NAME keeps us from conflicting with an image name in the future.
A word of caution on the Drop Export idea: we are going to constantly have conflicts on naming as folks delete old kind clusters and create new ones. The conflicts are across all three structs in the kubeconfig: user, cluster and context. Given that, we may want to determine some "correct" behaviour for overwriting (like backing up the original to .kube/config.orig). What's the behavior if the cluster is deleted?
Overall I am in favor! |
I thought about this, but I think it should match the flag for both, so it would be
We should ensure that kind clusters only conflict with kind clusters with the same contexts:
- context:
    cluster: gke_gob-prow_us-west1-a_prow
    <snipped details>
users:
- name: gke_gob-prow_us-west1-a_prow
  user:
    <snipped details>
We should record the file we wrote to in the node labels, maybe? Alternatively we just check the current one. For the file we locate, we then drop the keys that match that kind cluster. If there are no keys left we may delete the file. It is probably safest to just drop those keys though, and I would lean that way initially unless persuaded otherwise (possibly prior art from other tools?)
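For illustration, "dropping those keys" could be done with stock kubectl config commands; the kind-kind entry names below are only an assumption for the example:

```sh
# Hypothetical cleanup of one kind cluster's entries from the selected kubeconfig,
# assuming its context/cluster/user entries are all named "kind-kind".
kubectl config delete-context kind-kind   # remove the context entry
kubectl config delete-cluster kind-kind   # remove the cluster entry
kubectl config unset users.kind-kind      # remove the user entry
```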
This is a good question; we should inspect what the other common cluster provisioners do here!
I don't see why not, as long as we do something about those keys (see above)? |
i have similar concerns to @mauilion about the export change. if KUBECONFIG points to a non-kind cluster, how will the merge play with the fact that KUBECONFIG supports a list of paths? should the merge only happen in the first path? if KUBECONFIG does not point to a kind config, or is empty, should a message about |
Good point on env var naming, KIND_IMAGE is better :) I'm okay with merging the two.
There is another possibility that we could document for users worried about merge: the KUBECONFIG env var can take multiple paths and they will be merged (with the first kubeconfig "winning" and holding the active context selection). This first kubeconfig is also the only writeable one; the rest are read-only. If the user sets the var to export KUBECONFIG=/path/to/kind/kubeconfig:~/.kube/config then we'd populate the kind kubeconfig but they'd still be able to switch context to the original config. If KUBECONFIG gets unset then the original, unmodified config is active again. I think we can document this behavior effectively. |
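A quick sketch of that flow, reusing the path layout from the comment above (paths and the context name are examples only):

```sh
# kind's kubeconfig listed first, so it is the writable file; ~/.kube/config stays untouched.
export KUBECONFIG="/path/to/kind/kubeconfig:$HOME/.kube/config"
kubectl config get-contexts                 # contexts from both files are visible
kubectl config use-context original-ctx     # switch back to a context from the original config
unset KUBECONFIG                            # the unmodified ~/.kube/config becomes active again
```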
EDIT: ninjaed by @mauilion! 😅
Why not? The other tools do. KUBECONFIG is designed to support multiple clusters.
Yes it should write to the first entry in the list.
We need to print a message about the config being exported but the emptiness doesn't seem relevant? We may create the file if it doesn't exist, and client-go specifies the default location if not set. Kind already looks up and writes to the default directory.
We should still tell the user that we wrote the KUBECONFIG and to use kubectl (or link to some usage docs), but instead of printing the export command we'd say something roughly like the above. We could keep the config path command accurate if we record the exported path in the cluster state. We probably don't need to long-term, though. |
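Purely as an illustration of the kind of hint that could replace the export command (the wording and the kind-dev context naming are assumptions, not settled in this thread):

```sh
# illustrative only: instead of telling users to run
#   export KUBECONFIG="$(kind get kubeconfig-path)"
# the post-create output could point at the ready-to-use context, e.g.
#   Set kubectl context to "kind-dev"
#   You can now use your cluster with: kubectl cluster-info --context kind-dev
kind create cluster --name dev
kubectl cluster-info --context kind-dev   # works right away, no export step needed
```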
during testing various things, my use case (which is certainly not that important) is that i create a kubeadm node/cluster on the host and point KUBECONFIG to that. then from time to time i also create a kind cluster on the same host. i'd consider the merge of the kind and kubeadm configs in such a case undesired. a similar problem can happen if the user has some KUBECONFIG on their host that controls a production cluster and then they decide to test something with kind: this will inject kind cluster data into that production KUBECONFIG.
what if
or perhaps |
You can tell kind which file to use by either KUBECONFIG or --kubeconfig when creating the cluster; that does not necessitate unsetting anything. If you want separate files you either pass --kubeconfig or you modify KUBECONFIG. You can do as @mauilion pointed out above to continue to have access to the prod cluster in the same shell but write kind to another file, by listing the kind file first in KUBECONFIG. As noted above, many other tools do this, including minikube.
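Concretely, under the proposed behavior either of these keeps kind out of the default ~/.kube/config (the dedicated path is just an example):

```sh
# option 1: explicit flag at creation time
kind create cluster --kubeconfig "$HOME/.kube/kind-config"

# option 2: scope the environment variable to the command
KUBECONFIG="$HOME/.kube/kind-config" kind create cluster

# later commands need the same file to find the cluster's credentials
kubectl --kubeconfig "$HOME/.kube/kind-config" get nodes
```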
A) The production config needs to be recoverable anyhow. (Not that we would break it, but still!) Again, this is not really kind specific. Users already expect this from other tools. Multi-file config management seems less common than context management.
Nothing is being "stomped". Whatever file is selected, we'll merge with it; if the file doesn't exist, we'll create it.
Right. We should try to follow prior art in this message as well. I'm not sure what people call these files other than the environment variable name. Will also cross reference the Kubernetes docs. |
bad wording.
no strong objections on my side, but would be interesting to get more feedback from kind users. |
Looks good... as long as setting
EDIT: How are you going to implement the |
/assign |
This is basically done without a client-go dependency; it's a little bit more code to get all the nice things, but not too substantial, and the behavior and binary size are much nicer than before 🙃 Will PR tomorrow; need to do some minor cleanup. |
dug deep into expected behavior and replicated it in #1029 (comment). client-go is not necessary for this; however, we have closely mimicked how it does "locking" and the documented expectations from kubectl regarding selecting a kubeconfig, which are both relatively simple. the context set / unset behavior seems to be pretty universally the same, so we've mimicked that as well. will look into the env in a future PR; it looks like the pattern from kops in this regard is |
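The selection order being mimicked here is kubectl's documented precedence, roughly:

```sh
# 1. an explicit --kubeconfig flag wins
kind create cluster --kubeconfig /tmp/kind-test.kubeconfig

# 2. otherwise the KUBECONFIG environment variable (which may list several paths)
KUBECONFIG=/tmp/kind-test.kubeconfig kind create cluster

# 3. otherwise the default $HOME/.kube/config
kind create cluster
```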
#1061 brings back the kubeconfig-path command with a detailed (stderr) warning |
@BenTheElder Will it be still possible to get a config with the internal (docker) ip's? ( |
@Ilyes512 yes, in fact |
You can also now |
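A hedged example of fetching the internal (in-docker-network) variant with `kind get kubeconfig`, which appears to be what this exchange refers to (the cluster name is assumed):

```sh
# writes a kubeconfig whose server address is reachable from inside the docker network
kind get kubeconfig --internal --name dev > dev-internal.kubeconfig
```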
will track |
Recently we landed some changes to revamp the logging / debugging experience and to ensure that status messages can be suppressed (`-q` / `--quiet`). However, I think kind still has plenty to improve on this front.

In particular I propose the following changes to improve the usability of `kind`, some of which may be slightly breaking:

Drop `export KUBECONFIG=$(kind get kubeconfig-path)`
The `kind create cluster` command should accept a `--kubeconfig` flag and the environment variable `KUBECONFIG` to determine which file to use for the exported cluster kubeconfig. We should merge the generated cluster config into this file and set the default context to point to the newly created cluster.

Existing Feature Requests:
PROS:
- matches other tools such as `kops`, `minikube`, `gcloud` (GKE), ...
- `kind create cluster` is now the entire creation flow

CONS:
- behavior changes for anyone with `KUBECONFIG` set before `kind create cluster`
- may mean using a `replace` in `go.mod` for library consumers, and avoiding depending on newer client-go features; worst case we re-implement or fork the bits we need

Mitigations:
- you can set `KUBECONFIG` with your preferred path before calling `kind create cluster` instead of after; if you mimic the old path behavior this will work with old and new kind versions, and some users already hardcoded these paths in various places
- we can keep `kind get kubeconfig-path` for a migration period, but have it accept the same inputs and point to the path we'd choose in `kind create cluster`
- `v0.6` will already more or less break the CLI slightly to implement the revamped logging
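A before/after sketch of the creation flow under this proposal (names and paths are examples):

```sh
# before: the kubeconfig lives at a kind-managed path and has to be exported by hand
kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path)"
kubectl get nodes

# after: kind merges into whatever KUBECONFIG / --kubeconfig / ~/.kube/config selects
kind create cluster
kubectl get nodes   # no extra export step
```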
Support `KIND_NAME` environment variable

We should allow re-using entire kind scripts without the `--name` flag for each command by also accepting a `KIND_NAME` environment variable to specify the cluster name if the `--name` flag is not specified.

Existing Feature Requests:
- `direnv` support for `--name`, which seems like a somewhat popular approach

PROS:
- consistent with `KIND_*` environment variables for environment options as opposed to cluster configuration, e.g. `KIND_EXPERIMENTAL_NODE_BACKEND=podman`