Best way to remove asg without rebuilding all of them. #476
I don't think you can do that, actually. But since Terraform 0.12.6, we can now use `for_each` on resources. More info here: |
Maybe this will help you as a workaround for now: hashicorp/terraform#14275 (comment) |
Cool, thanks! Are there any plans to move to `for_each`? |
we have an eks cluster with five worker groups (asg's). i wanted to upgrade them from the 12.7 ami to the 12.10 ami. based on https://docs.aws.amazon.com/eks/latest/userguide/migrate-stack.html, i tried these steps:
because the worker_groups logic is index-based, a bunch of chaos ensues and the new asg's actually end up deleted. sounds like using `for_each` would help here. |
I would like to work on this, but it sounds like it will force everyone to move their resources in state. I don't know how painful this could be for users. Any thoughts? @max-rocket-internet @dpiddockcmp |
i had the same concerns about existing tf state, but fortunately our team is in a spot where we can rebuild all of our envs. so i forked somewhere in between 6.0.1 and 6.0.2 and implemented worker_group_launch_templates as a map of maps instead of a list. our team also had to split the eks control plane from the worker groups, as flipping bits for the eks api endpoint access vars was causing all the asg's to roll (i believe due to naming and kubeconfig/auth-map.rendered dependencies). here's an example of how our eks-worker-groups module looks:
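(The example itself didn't survive extraction. A minimal sketch of the map-of-maps idea, with all variable and attribute names hypothetical rather than taken from that fork, might look like:)

```hcl
variable "worker_groups" {
  # Keyed by a stable name instead of a list index, so removing
  # one group does not shift the addresses of the other groups.
  type = map(object({
    instance_type = string
    asg_max_size  = number
  }))
}

resource "aws_autoscaling_group" "workers" {
  for_each = var.worker_groups

  # State addresses become workers["group-01"], workers["group-02"], ...
  name     = each.key
  max_size = each.value.asg_max_size
  min_size = 0
  # ... remaining ASG arguments elided ...
}
```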
i am planning on getting a public repo up that can at least provide a starting point if someone wants to see how i handled all the for_each's. we also made a couple of other adjustments that would make submitting a PR not really feasible, but i really want to at least contribute the concept back. |
oh yeah, forgot to mention that upgrading the cluster via the steps i outlined above works really nicely. |
There's no way around it: the fix is moving to a map of maps, and I've done it to a few unrelated plans as part of our 0.12 upgrade efforts. You can't simply use `terraform state mv` here. A lot of short-term pain for all. It would make life much easier for people who change their worker groups, though. |
Hmmm, sounds painful.
Is it not possible to use `terraform state mv`? |
Unfortunately no, @max-rocket-internet. Take a look. First, I tried to just move it:

```
$ terraform state mv module.eks.aws_autoscaling_group.workers[1] module.eks.aws_autoscaling_group.workers[0]
Acquiring state lock. This may take a few moments...

Error: Invalid target address

Cannot move to module.eks.aws_autoscaling_group.workers[0]:
there is already a resource instance at that address in the current state.
```

Then, I tried to remove the resource at index 0 and move again:

```
$ terraform state rm module.sc-eks.module.eks.aws_autoscaling_group.workers[0]
Removed module.sc-eks.module.eks.aws_autoscaling_group.workers[0]
Successfully removed 1 resource instance(s).

$ terraform state mv module.eks.aws_autoscaling_group.workers[1] module.eks.aws_autoscaling_group.workers[0]
Move "module.eks.aws_autoscaling_group.workers[1]" to "module.eks.aws_autoscaling_group.workers[0]"

Error: Invalid target address

Cannot move to module.eks.aws_autoscaling_group.workers[0]:
module.eks.aws_autoscaling_group.workers does not exist in the
current state.
```
|
I was able to work around it by moving index 0 to index 2 and then moving index 1 to index 0. After that, I just ran apply again to destroy the old workers. |
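(A sketch of that three-step shuffle, assuming the same resource addresses as in the transcript above; parking index 0 at an unused index first is what avoids the "already a resource instance at that address" error:)

```sh
# Free up index 0 by parking it at an index the config doesn't use.
terraform state mv 'module.eks.aws_autoscaling_group.workers[0]' 'module.eks.aws_autoscaling_group.workers[2]'

# Now index 1 can slide down into the vacated slot.
terraform state mv 'module.eks.aws_autoscaling_group.workers[1]' 'module.eks.aws_autoscaling_group.workers[0]'

# The config no longer declares an element at index 2, so apply
# destroys the old worker group. Review the plan before confirming.
terraform apply
```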
I wanted to try to work on this. I started with the worker_launch_template and iterated from there. If someone has the time, I'd like some feedback/help here; I'm having issues with the
Despite using a |
Looking into what's been done for node_group, the best way would be to merge everything beforehand, from what I understand here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/modules/node_groups/locals.tf |
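(Following that pattern, a minimal sketch of merging shared defaults under each worker-group entry before the `for_each` runs; the variable and key names here are illustrative, not the module's actual interface:)

```hcl
locals {
  # Hypothetical defaults applied to every group.
  group_defaults = {
    instance_type = "m5.large"
    asg_max_size  = 3
  }

  # Merge defaults under each group's own settings up front,
  # so every key is guaranteed to exist on every group and the
  # resource blocks never need conditional lookups.
  worker_groups = {
    for name, cfg in var.worker_groups :
    name => merge(local.group_defaults, cfg)
  }
}
```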
What if we support both in parallel? |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This issue has been automatically closed because it has not had recent activity since being marked as stale. |
Any updates to this issue? I would like to have Several days ago I wanted to delete groups 0 and 1. After considering the plan Terraform gave me, I just scaled them to 0 rather than rebuild all the other groups :) |
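(The scale-to-zero trick keeps the list entry, and therefore its index, in place. Assuming `asg_*` sizing keys on the worker-group maps, which may not match the module version in use, it might look like:)

```hcl
worker_groups = [
  {
    # Keep this entry in the list so later indexes don't shift,
    # but run no instances in it.
    name                 = "group-00"
    asg_min_size         = 0
    asg_max_size         = 0
    asg_desired_capacity = 0
  },
  # ... remaining groups unchanged ...
]
```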
To treat the |
This is still a problem, can we please keep this issue open? |
Stale Bump :) |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
When I remove `group-01`, it changes the indexes and forces a rebuild of `group-02` and `group-03`. Is there a recommended way to avoid that?