feat: Ability to manage worker groups as maps #858
Conversation
Thanks @grzegorzlisowski for opening this PR. Actually, this is something we want to add to this module: totally dropping loops with count in favor of for/for_each. With that said, it would be nice to split the feature into a new submodule. We started some discussion at #774. @js-timbirkett may be busy at the moment, so if you can open a PR to address #774 it would be much appreciated.
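For context, a minimal sketch of the count -> for_each difference being discussed (illustrative HCL; the attribute names and referenced resources are assumptions, not this module's exact schema):

```hcl
# With count, worker groups are addressed by list index, so removing the
# first group re-indexes and replaces every group after it. With for_each,
# each group is addressed by a stable map key and can be added or removed
# independently.
resource "aws_autoscaling_group" "workers" {
  for_each = var.worker_groups # a map keyed by worker group name

  name                = "${var.cluster_name}-${each.key}"
  min_size            = each.value["asg_min_size"]
  max_size            = each.value["asg_max_size"]
  vpc_zone_identifier = var.subnets

  launch_template {
    id      = aws_launch_template.workers[each.key].id
    version = "$Latest"
  }
}
```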
aws_auth.tf
Outdated
@@ -38,11 +37,27 @@ locals {
    }
  ]

  auth_launch_template_worker_roles_ext = [
    for k, v in local.worker_group_launch_template_configurations_ext : {
      worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${var.manage_worker_iam_resources ? aws_iam_instance_profile.workers_launch_template_ext[k].role : data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile_ext[k].role_name}"
Need to replace the `:aws:` portion of the ARN with `:${data.aws_partition.current.partition}:` (same in `auth_worker_roles_ext` below).
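A minimal sketch of the suggested fix (`local.worker_role_name` stands in for the long conditional role expression above and is a hypothetical placeholder):

```hcl
data "aws_partition" "current" {}

locals {
  # Deriving the partition keeps the ARN valid in aws, aws-cn, and
  # aws-us-gov instead of hard-coding ":aws:".
  worker_role_arn_ext = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/${local.worker_role_name}"
}
```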
fixed
data.tf
Outdated
  name = each.value["iam_instance_profile_name"]
}

data "aws_region" "current" {}
Doesn't seem to be used. Remove?
fixed
Force-pushed from 9820bb7 to 695030c
modules/worker_groups/main.tf
Outdated
  dynamic mixed_instances_policy {
    iterator = item
    for_each = ((lookup(var.worker_groups[each.key], "override_instance_types", null) != null) || (lookup(var.worker_groups[each.key], "on_demand_allocation_strategy", null) != null)) ? list(each.value) : []
Is `var.worker_groups[each.key]` different from `each.value`? If not, I'd prefer to use that. It would be more consistent and readable.
Unfortunately, it is different according to my analysis. This is only to avoid changing old logic. See here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/workers_launch_template.tf#L93
Sorry, I responded too early.
`var.worker_groups_launch_template[count.index]` is what came in as an argument from the user, while `each.value` represents the input already merged with the module defaults, which IMHO could change the original behaviour.
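For illustration, `each.value` in this resource typically comes from a local merged along these lines (the local names are assumptions based on the discussion, not the PR's exact code):

```hcl
locals {
  # The user's raw input (var.worker_groups[each.key]) differs from
  # each.value because the latter has already been merged with the
  # module's per-group defaults.
  worker_group_configurations = {
    for name, group in var.worker_groups :
    name => merge(local.workers_group_defaults, group)
  }
}
```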
modules/worker_groups/main.tf
Outdated
  dynamic launch_template {
    iterator = item
    for_each = ((lookup(var.worker_groups[each.key], "override_instance_types", null) != null) || (lookup(var.worker_groups[each.key], "on_demand_allocation_strategy", null) != null)) ? [] : list(each.value)
Is `var.worker_groups[each.key]` different from `each.value`?
This is the same as the previous case above.
I was wondering if we shouldn't drop this directly, because for existing worker groups users will have to move resources in the state anyway. So why not move them directly into a map? For the locals, shouldn't we move https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/local.tf#L34-L100 into the submodule?
My assumption was that users could apply the new module, add new worker groups using maps, migrate the K8s resources to the new nodes, and then remove the old groups simply by removing them from the list. I left those locals in the original place because I assumed we still use them in the "legacy" worker definitions. If we just drop the old worker_groups definitions, then those should be moved.
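For existing groups, the state-move alternative mentioned above would look roughly like this (the resource addresses are hypothetical):

```sh
# Move a count-indexed ASG to its map-keyed address in the new submodule
# instead of draining and recreating the group.
terraform state mv \
  'module.eks.aws_autoscaling_group.workers[0]' \
  'module.eks.module.worker_groups.aws_autoscaling_group.workers["group-a"]'
```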
Force-pushed from 695030c to d5582b6
Do we know what is blocking this?
I presume review and approval? I'm not sure why this "Semantic Pull Request" check is not executing.
We could use worker_groups as a list of maps with a unique name key, instead of a map, so that it would be cleaner and we could do...
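A sketch of that suggestion (the variable and local names are illustrative):

```hcl
locals {
  # Accept a list of maps, each carrying a unique "name" key, and build
  # the map consumed by for_each internally.
  worker_groups_map = {
    for group in var.worker_groups :
    group["name"] => group
  }
}
```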
}

data "aws_ami" "eks_worker" {
  filter {
Can we use for_each and add support for a worker_node_version for each worker group, with the default being cluster_version when worker_node_version is not specified?
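A sketch of this suggestion (`worker_node_version` is the proposed, hypothetical per-group attribute):

```hcl
# Look up one AMI per worker group, falling back to the cluster version
# when no per-group version is specified.
data "aws_ami" "eks_worker" {
  for_each = var.worker_groups

  filter {
    name   = "name"
    values = ["amazon-eks-node-${lookup(each.value, "worker_node_version", var.cluster_version)}-v*"]
  }

  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI account (commercial partition)
}
```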
What do you mean by 'worker_node_version'?
It could be done, but I presume it might make sense to keep the same approach as for...
Force-pushed from bf35ebe to 8d8bd48
Errors sometimes appear randomly, without changing anything in the code.
Issues happen when adding a second group to...
@ZeroDeth
Force-pushed from 1cbf970 to 87edcca
Force-pushed from 87edcca to da785fc
Thanks @grzegorzlisowski for working on this. We have a terraform-aws-modules working session this Friday. We'll discuss the direction we want to take with this feature. We'll get back to you pretty soon.
Force-pushed from da785fc to c29c612
OK. Since I use Terraform via an in-house service, I think I can't modify the tfstate. So I'd probably need to create a temporary fork supporting both mechanisms, if that's at all possible, or switch to managed nodes, which we're considering anyway. On the other hand: migrating all of the current resources locks us out of certain clean-ups and simplifications in the old worker group implementation. Moving all map-based code into a submodule could be a good moment to do so. I'm mainly thinking of sticking to the EKS-created security group and ditching the now-redundant SG created by this module. See #1196 (comment) and my follow-up comment. @barryib, what are your thoughts on this? I know this risks re-activating long-settled discussions, which can slow down momentum. That's obviously not my intention, so I'm fine with any answer.
Force-pushed from a429b61 to aa8f722
In the additional commit, I have added the option to migrate from the old (legacy) worker groups to the new ones. The module maps the existing variable (a list) appropriately: worker_groups -> worker_groups_legacy. This way, existing users can upgrade to the new module version and slowly migrate compute to the new map-based approach via worker_groups. Not sure if this approach is acceptable...
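Hypothetical usage of that migration path (the two worker group variables come from this PR; all other inputs and values are illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.21"
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  # Existing list-based groups keep running unchanged...
  worker_groups_legacy = [
    {
      name                 = "old-group"
      instance_type        = "m5.large"
      asg_desired_capacity = 2
    },
  ]

  # ...while new groups are added to the map; drained legacy groups are
  # later removed from the list.
  worker_groups = {
    new-group = {
      instance_type        = "m5.large"
      asg_desired_capacity = 2
    }
  }
}
```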
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@barryib is this something that is likely to happen soon?
Force-pushed from af59eb0 to f31281e
- Create separate defaults for node groups
- Workers IAM management left outside of the module, as both node_group and worker_groups use them
- Add option to migrate to the worker group module
Force-pushed from f31281e to 0ecfa80
Any chance for a review and merge?
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been resolved in version 18.0.0 🎉
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock
Description
Resolves #774
The change is intended to improve the ability to manage worker groups using maps, which should make adding and removing worker groups more flexible (improving on this: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-safely-remove-old-worker-groups).
The change implements the suggestions from #774, including:
Change to the worker group definitions (sketched below):
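A sketch of the new map-based shape (attribute names are illustrative, not the module's full schema):

```hcl
worker_groups = {
  workers-a = {
    instance_type        = "m5.large"
    asg_desired_capacity = 1
  }
  workers-b = {
    instance_type = "t3.medium"
  }
}
```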
Checklist