✋ I have searched the open/closed issues and my issue is not listed.
Please describe your question here
I had Karpenter set up previously, went through the whole upgrade process, and was able to get Karpenter up and running while managing all of my nodes. I noticed that the `Name` tags all show up in the `ip-10-0-0-0.ec2.internal` style rather than `ip-10-0-0-0.ec2.compute.internal`, and Karpenter seems to be renaming the nodes to that first style. This is a problem because the IAM role in this module only allows management of nodes that are named like `*.compute.internal` or `*karpenter*`. I manually changed the `Name` tags of my EC2 instances to follow the second style (`ec2.compute.internal`), and Karpenter was able to manage them gracefully.

Am I missing something here? Is there a bug in the module that's causing the issue? I found the other issues that talk about the module following this policy; however, that policy seems to indicate that you should scope by a `kubernetes.io` tag as well as a `karpenter.sh` tag rather than by the `Name` tag.

If this truly is a bug, I'm happy to write up a full bug report and potentially also fix it in a branch.
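For reference, the restriction described above presumably boils down to a `Name`-tag condition along these lines (a sketch based on the behavior described here, not the module's exact code):

```hcl
# Sketch of the Name-scoped condition described above, not the module's
# exact code. Termination is only allowed when the instance's Name tag
# matches one of these patterns, which is why nodes named
# ip-10-0-0-0.ec2.internal fall outside the role's permissions.
data "aws_iam_policy_document" "name_scoped_termination" {
  statement {
    effect    = "Allow"
    actions   = ["ec2:TerminateInstances"]
    resources = ["*"]

    condition {
      test     = "StringLike"
      variable = "ec2:ResourceTag/Name"
      values   = ["*.compute.internal", "*karpenter*"]
    }
  }
}
```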
Provide a link to the example/module related to the question
terraform-aws-eks-blueprints-addons/main.tf, line 2721 (commit 1b08173)

More specifically:

terraform-aws-eks-blueprints-addons/main.tf, line 2841 (commit 1b08173)
Additional context

I added the statement the way the linked policy has it, and it does seem to work, so I'm wondering if it just needs to be updated in the module. Statement I used:
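In sketch form (the exact statement isn't reproduced above, so `my-cluster` and the `karpenter.sh/nodepool` tag key below are placeholders to adjust to the linked policy):

```hcl
# Tag-scoped termination modeled on the linked Karpenter policy; a sketch,
# not the verbatim statement. "my-cluster" is a placeholder cluster name,
# and the karpenter.sh tag key differs across Karpenter versions (e.g.
# karpenter.sh/provisioner-name in older releases, karpenter.sh/nodepool later).
data "aws_iam_policy_document" "karpenter_conditional_termination" {
  statement {
    sid       = "ConditionalEC2Termination"
    effect    = "Allow"
    actions   = ["ec2:TerminateInstances"]
    resources = ["*"]

    # The instance must belong to the cluster...
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/kubernetes.io/cluster/my-cluster"
      values   = ["owned"]
    }

    # ...and have been launched by Karpenter. No dependence on the Name tag.
    condition {
      test     = "StringLike"
      variable = "aws:ResourceTag/karpenter.sh/nodepool"
      values   = ["*"]
    }
  }
}
```

Scoping on these tags makes the permission independent of whether a node's `Name` ends up in the `ec2.internal` or `compute.internal` style.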