Be able to define per ASGs tags #89
Comments
I think that sounds reasonable. You can add an extra variable into
This will be a lot easier to do when Terraform 0.12 (with a better type system, e.g. nested maps, and for-loop support) is out.
Had a try, since
No worries. Maybe wait until 0.12 is released then.
In my almighty laziness, this is how I made this work. For clarity on why this works, take a look at kubelet.service. This will create a new line for node labels in kubelet.service, which will populate the label on the nodes. You can ignore the nfs-utils part.
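As an illustration of the approach described above (the `kubelet_extra_args` key and the label values here are assumptions for the sketch, not the original snippet):

```hcl
# Illustrative worker group entry: pass --node-labels to the kubelet so nodes in
# this group register with the label already set. The kubelet_extra_args key is
# assumed to be supported by the module's worker_groups maps.
worker_groups = [
  {
    name               = "special-workers"
    instance_type      = "m5.large"
    kubelet_extra_args = "--node-labels=workload=special"
  },
]
```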
@shamsalmon, I don't understand how you can tag a node with your solution, can you please clarify? thanks!
@max-rocket-internet, per the use case of being able to auto-scale down to zero nodes, the cluster-autoscaler requires a certain tag to be present on the ASG depending on the node label, e.g.:
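Purely as an illustration, cluster-autoscaler's scale-from-zero support reads node-template tags on the ASG along these lines (the label name and value are placeholders, not the exact tag from this comment):

```hcl
# Illustration only: a node-template tag on the ASG tells cluster-autoscaler which
# labels nodes from this group would carry, so it can scale the group up from zero.
tags = [
  {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/workload"
    value               = "special"
    propagate_at_launch = true
  },
]
```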
I have an approach that solves only the above use case in a PR within our fork: What are your thoughts on that approach? I understand that we would like to wait on Terraform 0.12 for the more general use case of allowing arbitrary tags, but Terraform 0.12 realistically won't be generally available until the end of Q1, and adoption will probably take longer.
That is very specific to you though and, as you mentioned, it doesn't allow just a map of extra tags to be applied per ASG. Hmmmmm. How about we add the option now and it will not support merging? So if you want per-ASG tags, then you just need to supply ALL the tags, i.e. including
Well, since this module was already putting in cluster-autoscaler-specific tags for its auto-discovery of ASGs feature, the changes above are a continuation of that to allow the ASGs to scale down to zero. But I am in support of the option you mention: either working with the defaults that are currently provided or having to supply all of the tags for each ASG.
@Chili-Man this is somewhat tangential but I am having difficulty understanding the sparse documentation around this - does applying that tag mean that CA will scale the tagged ASG to zero or that the capability will now exist to scale it to zero? For example, if I have blue/green worker ASGs, the tag should be applied to both and setting the ASG size is what will allow CA to scale it to zero? The most confusing part for me is:
So does this mean that it works without tags using autoDiscovery?
I have been testing scaling an ASG (one of two for blue/green) to zero with cluster-autoscaler and EKS and have not been able to get it to work, even with adding the appropriate tags and using nodeGroups and a nodeSelector. I have opened an issue here to track it.
@max-rocket-internet I am just having a go at solving this one and wanted some input. The easiest way I found to solve it was by adding two new variables:
They would basically be defined as a list of lists containing the maps for the tags, which would be in index order of your defined worker_groups.
I am no Terraform expert and I attempted this in a few other ways, but was unable to get it working by having it part of the worker_groups maps. Do you have any thoughts before I go getting a PR together? Here is what I have so far, which appears to be working for my uses: master...stefansedich:worker-group-tags
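As a rough sketch of that shape (the variable name `worker_group_tags` and all values are assumptions based on the description above, not the contents of the linked branch):

```hcl
# Hypothetical: one inner list of tag maps per worker group, in the same index
# order as worker_groups. Here worker_groups[0] gets an extra tag and
# worker_groups[1] gets none beyond the module defaults.
worker_group_tags = [
  [
    {
      key                 = "Team"
      value               = "data-eng"
      propagate_at_launch = true
    },
  ],
  [],
]
```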
@stefansedich I think your work idea here looks fine! Please make a PR 😃
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
Tags currently are too prescriptive. I have a use case where I need to tag different ASGs with different tags. I'm using the ability to push these tags down to node labels and taints to drive different workloads on my Kubernetes cluster. At the moment, it seems that I can only define tags once on the top-level EKS module and these tags are used globally throughout. I would like to be able to define tags per ASG. A sensible place to provide these seems to be the list of worker_groups maps.
I'm submitting a...
What is the current behavior?
Tags are defined once in var.tags and used throughout, both to tag the cluster resources themselves and all ASGs that are created.
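For context, a minimal sketch of how tags are passed today (argument names follow the module's documented interface, but treat the values as illustrative):

```hcl
# Today a single tags map on the module is applied to the cluster resources and
# to every ASG alike; there is no per-worker-group override.
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "example"
  subnets      = ["subnet-aaaa", "subnet-bbbb"]
  vpc_id       = "vpc-cccc"

  tags = {
    Environment = "prod"
  }

  worker_groups = [
    {
      name          = "general"
      instance_type = "m5.large"
    },
  ]
}
```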
If this is a bug, how to reproduce? Please include a code sample if relevant.
What's the expected behavior?
It should be possible to provide tags in the list of worker_groups maps, and these should be used to tag the corresponding ASGs that are created for each respective worker group. If the tags are not set, they can default to the existing top-level global tag variable.
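A hypothetical shape of what this issue asks for (the tags key inside a worker group map is the requested feature, not an existing module input; names and values are placeholders):

```hcl
# Desired: per-group tags inside each worker_groups map, falling back to the
# module-level var.tags when a group does not set any.
worker_groups = [
  {
    name          = "general"
    instance_type = "m5.large"
    # no tags key here: inherits the module-level tags
  },
  {
    name          = "special"
    instance_type = "c5.xlarge"

    tags = [
      {
        key                 = "workload"
        value               = "special"
        propagate_at_launch = true
      },
    ]
  },
]
```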
Are you able to fix this problem and submit a PR? Link here if you have already.
I can submit a PR if this is reasonable.
Environment details
Any other relevant info