
added: support for adding labels from the config file itself #1926

Conversation

@yashvardhan-kukreja (Contributor) commented Nov 13, 2020

closes #1574

My approach:
Node labelling runs as one of the actions (actionsToRun) during cluster creation. I have not coded a dedicated action just for labelling the nodes; instead, I have created a file, postcreationconfiguration.go, which holds all the functions that are supposed to run after the cluster gets created (like labelling, in our case).

In postcreationconfiguration.go I have written a labelNodes function which labels the nodes according to the config file.

Why did I do it?
Say, in the future, there is another requirement such as adding taints or tolerations (or some other configuration that in practice is applied after the cluster is created) through the config file itself. Then we can write functions for those in postcreationconfiguration.go as well, instead of creating dedicated actions for them, since things like taints and tolerations are usually applied after the cluster is already created.

What's happening behind the scenes?
At every iteration, if the labels are being applied to a control-plane node, that control-plane node applies the labels to itself.
But if the labels are being applied to a worker node, one of the control-plane nodes applies the labels to that worker node.
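
For illustration, this labelling step amounts to a kubectl label invocation issued from a control-plane node; a minimal sketch using a node and labels from the example output below (the exact invocation in the PR may differ):

kubectl label node master-worker name=worker tier=app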

Usage from user's side:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
  labels:
    name: master
    index: 2
- role: worker
  labels:
    name: worker
    tier: app
- role: worker

Output (note the end of each LABELS row):

NAME                   STATUS  ROLES   LABELS
master-control-plane   Ready   master  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,index=2,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-control-plane,kubernetes.io/os=linux,name=master,node-role.kubernetes.io/master=
master-control-plane2  Ready   master  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-control-plane2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
master-worker          Ready   <none>  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-worker,kubernetes.io/os=linux
master-worker2         Ready   <none>  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-worker2,kubernetes.io/os=linux,name=worker,tier=app

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Nov 13, 2020
@k8s-ci-robot (Contributor)

Hi @yashvardhan-kukreja. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test label (indicates a PR that requires an org member to verify it is safe to test) on Nov 13, 2020
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: yashvardhan-kukreja
To complete the pull request process, please assign munnerz after the PR has been reviewed.
You can assign the PR to them by writing /assign @munnerz in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) on Nov 13, 2020
@yashvardhan-kukreja force-pushed the issue-1574/support-for-node-labels-to-config branch 4 times, most recently from d5a0a38 to b7cf15a, on November 13, 2020 16:16
@yashvardhan-kukreja force-pushed the issue-1574/support-for-node-labels-to-config branch from b7cf15a to 54fa55b on November 13, 2020 16:18
@BenTheElder (Member)

Why wouldn't we have kubelet apply these?

@yashvardhan-kukreja (Contributor, Author) commented Nov 13, 2020

Why wouldn't we have kubelet apply these?

Labels can be added to control-plane nodes as well, right in the config file; how would we then go about applying labels to the control-plane nodes?
If we went with kubelet's way of applying node labels, we'd have to code two separate pieces of logic: one for adding labels to worker nodes (via kubelet at node-registration time) and one for adding labels to control-plane nodes (via kubectl or something else).

Also, here in the code the labels are applied as a post-cluster-creation step, where all the control-plane and worker nodes are labelled together with the same function/logic. Hence kubectl seemed the cleaner and more straightforward way to implement the labelling.

That's my perspective, @BenTheElder. What do you think?

@BenTheElder (Member)

If we went with kubelet's way of applying node labels, we'd have to code two separate pieces of logic: one for adding labels to worker nodes (via kubelet at node-registration time) and one for adding labels to control-plane nodes (via kubectl or something else).

Huh? All of the nodes run kubelet and are registered. The labels are attached to the node at registration time by kubelet. The control-plane kubelet registers as well ...

Also, here in the code the labels are applied as a post-cluster-creation step, where all the control-plane and worker nodes are labelled together with the same function/logic. Hence kubectl seemed the cleaner and more straightforward way to implement the labelling.

kind is not a general command runner and we already have a (bad) abstraction for various stages of doing things.
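
For context, kubelet exposes a --node-labels flag that attaches labels to the Node object when it registers; a minimal sketch of the flag's shape (all other required kubelet flags omitted):

# labels are attached when the node registers with the API server
kubelet --node-labels=name=worker,tier=app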

@yashvardhan-kukreja (Contributor, Author)

If we went with kubelet's way of applying node labels, we'd have to code two separate pieces of logic: one for adding labels to worker nodes (via kubelet at node-registration time) and one for adding labels to control-plane nodes (via kubectl or something else).

Huh? All of the nodes run kubelet and are registered. The labels are attached to the node at registration time by kubelet. The control-plane kubelet registers as well ...

Also, here in the code the labels are applied as a post-cluster-creation step, where all the control-plane and worker nodes are labelled together with the same function/logic. Hence kubectl seemed the cleaner and more straightforward way to implement the labelling.

kind is not a general command runner and we already have a (bad) abstraction for various stages of doing things.

Ohh alright, my bad. I'll switch to implementing this with kubelet.

@yashvardhan-kukreja (Contributor, Author) commented Nov 14, 2020

@BenTheElder, just a small thought.

Cluster creation happens through kubeadm in the code, which is why the code doesn't execute any kubelet commands directly while creating the cluster (the kubeadm CLI does that behind the scenes). But the way you suggested, adding labels to nodes at node-creation time via kubelet (using kubelet --node-labels, the way I see it), would require the code to run kubelet commands instead of, or alongside, the kubeadm commands.

Wouldn't that be messy, or at least rough, to implement?

Just a thought. Please correct me if I am wrong.

@yashvardhan-kukreja (Contributor, Author) commented Nov 14, 2020

Also, I don't know if there's any way to label nodes via kubeadm, but say there is one: then we would have to code the labelling of nodes in two places, once in the kubeadminit.go action (for labelling the control-plane nodes) and once in the kubeadmjoin.go action (for labelling the worker nodes).

But in my approach in the current PR it all happens in one place, postcreationconfiguration.go, which seems cleaner and more straightforward.
Any thoughts regarding that?

@yashvardhan-kukreja (Contributor, Author)

I hope that makes sense.
(Please correct me if you think I am going wrong somewhere.)

@BenTheElder (Member)

Also, I don't know if there's any way to label nodes via kubeadm, but say there is one: then we would have to code the labelling of nodes in two places, once in the kubeadminit.go action (for labelling the control-plane nodes) and once in the kubeadmjoin.go action (for labelling the worker nodes).

No, it would be done via the kubeadm config, by adjusting kubelet options.
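
Concretely, this is done through nodeRegistration.kubeletExtraArgs in the kubeadm config; a minimal sketch of a patch of that shape, assuming the v1beta2 kubeadm API that was current around this time (the label values are illustrative):

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # passed through to kubelet as --node-labels at registration time
    node-labels: "name=sample-worker,worker-age=3"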

@k8s-ci-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) and removed the size/L label on Nov 18, 2020
@yashvardhan-kukreja force-pushed the issue-1574/support-for-node-labels-to-config branch from 6743f41 to c8e2546 on November 18, 2020 07:15
@yashvardhan-kukreja force-pushed the issue-1574/support-for-node-labels-to-config branch from c8e2546 to 4b5c1bb on November 18, 2020 07:23
@yashvardhan-kukreja (Contributor, Author)

@BenTheElder I removed all the 'postcreationconfiguration' stuff and coded the addition of labels at node-registration time via kubeadm's kubeletExtraArgs.

@yashvardhan-kukreja (Contributor, Author)

And I tested it with the following configuration; it works as expected:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  labels:
    name: sample-controller
    age: 2
- role: control-plane
  labels:
    name: sample-controller-2
    index: 3
- role: worker
  labels:
    name: sample-worker
    worker-age: 3
- role: worker
- role: worker

Output

$ kubectl get nodes --show-labels
NAME                  STATUS   ROLES    AGE   VERSION   LABELS
kind-control-plane    Ready    master   25m   v1.19.3   age=2,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-control-plane,kubernetes.io/os=linux,name=sample-controller,node-role.kubernetes.io/master=
kind-control-plane2   Ready    master   25m   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,index=3,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-control-plane2,kubernetes.io/os=linux,name=sample-controller-2,node-role.kubernetes.io/master=
kind-worker           Ready    <none>   23m   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker,kubernetes.io/os=linux
kind-worker2          Ready    <none>   23m   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker2,kubernetes.io/os=linux,name=sample-worker,worker-age=3
kind-worker3          Ready    <none>   23m   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker3,kubernetes.io/os=linux

@yashvardhan-kukreja (Contributor, Author)

@BenTheElder @neolit123 @krzyzacy Please review this PR

@BenTheElder (Member) commented Dec 14, 2020

sorry, all time is currently consumed by bug-fixes (or non-kind-related work; none of the maintainers work full time on kind), with some small bits of time granted to documentation (which helps save everyone's time).

features and releases are on hold; in particular #1969 needs to be resolved.

I will get to this ASAP, but I need to prioritize these still.

EDIT: previously I was personally unavailable for any work for a bit, for personal reasons; since I've been back I've been resolving those.

@yashvardhan-kukreja (Contributor, Author) commented Dec 15, 2020

features and releases are on hold; in particular #1969 needs to be resolved.

Regarding this: it has already received an lgtm from Lubomir :)

sorry, all time is currently consumed by bug-fixes (or non-kind-related work; none of the maintainers work full time on kind), with some small bits of time granted to documentation (which helps save everyone's time).

features and releases are on hold; in particular #1969 needs to be resolved.

I will get to this ASAP, but I need to prioritize these still.

EDIT: previously I was personally unavailable for any work for a bit, for personal reasons; since I've been back I've been resolving those.

Sure, @BenTheElder, no worries. Please take your time.
Thanks a lot for the heads-up though 😄

@BenTheElder removed the needs-ok-to-test label on Jan 22, 2021
@BenTheElder added this to the v0.11.0 milestone on Jan 22, 2021
@BenTheElder (Member)

#2023

@BenTheElder (Member)

sorry for the incredibly long delay. I am closing out our backlog prior to #2120, and since the last release we've expanded the maintainer group to help avoid this.

I'd still like to move this to v1alpha5, but as that doesn't exist yet, this is merging as v1alpha4. if we get v1alpha5 ready in time for 1.21, we'll migrate this over; if not, we'll just leave it as a v1alpha4 addition.

thank you for working on this.

@BenTheElder merged commit efcd23b into kubernetes-sigs:master on Mar 15, 2021
@BenTheElder added the approved and lgtm labels on Mar 15, 2021
@k8s-ci-robot (Contributor)

@yashvardhan-kukreja: The following test failed, say /retest to rerun all failed tests:

Test name: pull-kind-e2e-kubernetes
Commit: 4b5c1bb
Rerun command: /test pull-kind-e2e-kubernetes

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Benny-Git added a commit to Benny-Git/kind that referenced this pull request Aug 13, 2022
@wallrj commented Feb 28, 2024

Some node labels in the config file (such as node-role.kubernetes.io/worker) cause kind create cluster to fail with an unfriendly error.
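
For illustration, a minimal config of the kind that reportedly triggers this failure (kubelet rejects self-applied labels under the restricted node-role.kubernetes.io prefix, so node registration fails):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: worker
  labels:
    node-role.kubernetes.io/worker: ""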
