Karpenter name tags are incorrect? #309

Closed
henryjarend opened this issue Nov 9, 2023 · 1 comment · Fixed by #315

@henryjarend

  • ✋ I have searched the open/closed issues and my issue is not listed.

Please describe your question here

I had Karpenter set up previously, went through the whole upgrade process, and was able to get Karpenter up and running while managing all of my nodes. I noticed that the Name tags all show up in the ip-10-0-0-0.ec2.internal style rather than ip-10-0-0-0.ec2.compute.internal, and Karpenter seems to be renaming the nodes to that first style. This is a problem because the IAM role in this module only allows management of nodes that are named like *.compute.internal or *karpenter*. I manually changed the Name tags of my EC2 instances to follow that second style (ec2.compute.internal) and it was able to manage them gracefully.

Am I missing something here? Is there a bug in the module that's causing the issue? I found other issues that talked about the module following this policy; however, that policy seems to indicate that you should use a kubernetes.io tag as well as a karpenter.sh tag.

If this truly is a bug, I'm happy to write out a full bug report and potentially fix it in a branch as well.

Provide a link to the example/module related to the question


more specifically:

```hcl
values = ["*karpenter*", "*compute.internal"]
```
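
For illustration, this is roughly the shape of the statement I mean, written as an aws_iam_policy_document condition around those values. It's only a sketch, not the module's actual source; the ec2:ResourceTag/Name condition key, the sid, and the action list are my assumptions about what the module scopes on.

```hcl
# Sketch only (not the module's source): termination is allowed only when
# the instance's Name tag matches one of the listed patterns, which is why
# nodes named in the ip-10-0-0-0.ec2.internal style fall outside the policy.
data "aws_iam_policy_document" "name_scoped_termination" {
  statement {
    sid       = "ConditionalEC2Termination"
    effect    = "Allow"
    actions   = ["ec2:TerminateInstances"]
    resources = ["*"]

    condition {
      test     = "StringLike"
      variable = "ec2:ResourceTag/Name"
      values   = ["*karpenter*", "*compute.internal"]
    }
  }
}
```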

Additional context

@henryjarend
Author

I added the statement the way the linked policy has it, and it does seem to work. So I'm wondering if maybe it just needs to be updated in the module?
The statement I used:

      "Sid": "AllowScopedDeletion",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:us-east-1:*:instance/*",
        "arn:aws:ec2:us-east-1:*:launch-template/*"
      ],
      "Action": [
        "ec2:TerminateInstances",
        "ec2:DeleteLaunchTemplate"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/production": "owned"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.sh/nodepool": "*"
        }
      }
    },```
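
For context, here is roughly how that statement could be expressed as a Terraform aws_iam_policy_document block if it were folded into the module. This is only a sketch: the data source name is made up, and the cluster name ("production"), region, and partition are hard-coded to mirror the JSON above, whereas the module would presumably build them from its own variables.

```hcl
# Sketch: the JSON statement above, translated into an
# aws_iam_policy_document data source. Values are hard-coded to match
# the example policy; a module would parameterize cluster name/region.
data "aws_iam_policy_document" "karpenter_scoped_deletion" {
  statement {
    sid    = "AllowScopedDeletion"
    effect = "Allow"

    actions = [
      "ec2:TerminateInstances",
      "ec2:DeleteLaunchTemplate",
    ]

    resources = [
      "arn:aws:ec2:us-east-1:*:instance/*",
      "arn:aws:ec2:us-east-1:*:launch-template/*",
    ]

    # Only instances/templates tagged as owned by this cluster...
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/kubernetes.io/cluster/production"
      values   = ["owned"]
    }

    # ...and carrying a karpenter.sh/nodepool tag may be deleted.
    condition {
      test     = "StringLike"
      variable = "aws:ResourceTag/karpenter.sh/nodepool"
      values   = ["*"]
    }
  }
}
```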
