
[Bug] Access Entries for Self-Managed Nodes not created by eksctl not working #7446

Closed
cmartintlh opened this issue Dec 27, 2023 · 8 comments · Fixed by #7522
Labels
kind/bug priority/important-longterm Important over the long term, but may not be currently staffed and/or may require multiple releases

Comments

@cmartintlh

What were you trying to accomplish?

Create an access entry for an EC2_LINUX IAM role for Karpenter to use.

What happened?

When attempting to create an access entry via the config file for an EC2 Linux self-managed node group used by Karpenter, eksctl expects a policy ARN to be passed. The documentation for creating this type of access entry seems to be lacking, or perhaps eksctl currently only supports creating these entries for self-managed node groups that eksctl itself creates (as opposed to external self-managed node groups, such as Karpenter nodes).

How to reproduce it?

First, I tried to create the entry without specifying a type. This won't work because eksctl creates it as a STANDARD entry, which doesn't allow system:* values in kubernetesGroups.

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::11111111111:role/KarpenterNodeInstanceProfile
      kubernetesUsername: system:node:{{EC2PrivateDNSName}}
      kubernetesGroups:
        - system:bootstrappers
        - system:nodes

The kubernetes group name system:nodes is invalid, it cannot start with system

Then I tried passing the type as EC2_LINUX, but eksctl expects a policyARN to be set alongside it.

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::11111111111:role/KarpenterNodeInstanceProfile
      kubernetesUsername: system:node:{{EC2PrivateDNSName}}
      kubernetesGroups:
        - system:bootstrappers
        - system:nodes
      accessPolicies:
        - accessScope:
            type: EC2_LINUX

Error: accessEntries[0].policyARN must be set to a valid AWS ARN

I believe my second attempt is the correct approach, and that eksctl should not require a policyARN for this type.

Logs
Error: accessEntries[0].policyARN must be set to a valid AWS ARN

Versions

eksctl version: 0.167.0-dev+4280da7cd.2023-12-22T09:42:31Z
kubectl version: v1.29.0
OS: darwin


autarchprinceps commented Dec 28, 2023

I was looking for the same thing. I think EC2_LINUX is just plain not supported in eksctl yet. accessPolicies -> accessScope -> type is for choosing cluster vs. namespace scope for access policies; the STANDARD / FARGATE_LINUX / EC2_LINUX / EC2_WINDOWS entry type field doesn't appear in the config file schema documentation, at least. I noticed the Fargate entry is created automatically with the Fargate profile, so one probably doesn't need to create that manually. But for Karpenter, as opposed to managed nodegroups, one does need the EC2 types. For now I still used IAM identity mapping for them, which works as long as you specify authenticationMode: API_AND_CONFIG_MAP and don't go full API-only yet. Still, +1 for a feature request to finalise support for access entries; otherwise you won't get very far with it.

Beyond that, I think kubernetesGroups & kubernetesUsername are not specifiable for EC2_LINUX; each has a single prefilled value that is valid for that type. That won't help you create the entry with eksctl today, but you'll need to remove those fields once support is added.

@cmartintlh
Author

> Beyond that, I think kubernetesGroups & kubernetesUsername are not specifiable for EC2_LINUX; each has a single prefilled value that is valid for that type. That won't help you create the entry with eksctl today, but you'll need to remove those fields once support is added.

Yep, looks like you're correct. I just went through the creation in the EKS console and those are not specifiable. Good catch.

The resulting access entry from the console only included the system:nodes group, but it looks like this will be fine. The ClusterRoleBinding that was used for the system:bootstrappers also applies to system:nodes.
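For anyone who wants to double-check that on their own cluster, here's a quick sketch. The binding name eks:node-bootstrapper is an assumption based on clusters I've seen; the second command finds the relevant bindings regardless of name:

# Show the subjects of the node bootstrap ClusterRoleBinding (name assumed)
kubectl get clusterrolebinding eks:node-bootstrapper -o yaml

# Or scan all bindings for ones granting to the system:nodes group
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.subjects[]?.name == "system:nodes") | .metadata.name'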

@mikestef9
Collaborator

Yeah, this isn't supported in eksctl yet. You could add a dummy managed node group of size 0 using the same node role as a temporary workaround. We are prioritizing adding this functionality in a near-future release.
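For anyone following along, a minimal sketch of that workaround as eksctl config, assuming the managedNodeGroups schema and its iam.instanceRoleARN field pointing at the same role Karpenter uses (the nodegroup name and account ID are illustrative):

managedNodeGroups:
  # Dummy size-0 nodegroup whose only purpose is to get the node role
  # registered with the cluster; it never runs instances.
  - name: access-entry-dummy        # hypothetical name
    desiredCapacity: 0
    minSize: 0
    maxSize: 1                      # EKS requires maxSize >= 1
    iam:
      instanceRoleARN: arn:aws:iam::11111111111:role/KarpenterNodeInstanceProfile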


DanielCKennedy commented Jan 29, 2024

We need a type field under accessEntries for this to work.

Example:

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::11111111111:role/KarpenterNodeInstanceProfile
      type: EC2_LINUX

ref: https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html
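Until eksctl grows that field, the same call can be made directly with the AWS CLI documented at that link, since create-access-entry already accepts --type (cluster name and account ID are placeholders):

aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::11111111111:role/KarpenterNodeInstanceProfile \
  --type EC2_LINUX

Note there is no --kubernetes-groups or --username here; EC2_LINUX entries don't accept them.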


serg4kostiuk commented Apr 22, 2024

hey guys,
I've seen that there is a PR and it's been closed, but I'm still hitting the same issue.

accessConfig:
  bootstrapClusterCreatorAdminPermissions: true
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::11111111111:role/eksctl-staging-eks-129-no-NodeInstanceRole-MBHnnnbowvxB
      kubernetesUsername: system:node:{{EC2PrivateDNSName}}
      kubernetesGroups:
        - system:bootstrappers
        - system:nodes
      accessPolicies:
        - accessScope:
            type: EC2_LINUX
    - principalARN: arn:aws:iam::11111111111:role/eksctl-staging-eks-129-no-NodeInstanceRole-uHc2uRD144gE
      kubernetesUsername: system:node:{{EC2PrivateDNSName}}
      kubernetesGroups:
        - system:bootstrappers
        - system:nodes
      accessPolicies:
        - accessScope:
            type: EC2_LINUX

EKS version: "1.29"
eksctl version: 0.175.0

Error:

$ eksctl utils update-authentication-mode -f staging-eks-129.yaml
Error: accessEntries[0].policyARN must be set to a valid AWS ARN

Could anyone suggest how to fix this behaviour?

@cmartintlh
Author

> hey guys, I've seen that there is a PR and it's been closed, but I'm still hitting the same issue […]

Your config is incorrect. You just need this:

accessConfig:
  bootstrapClusterCreatorAdminPermissions: true
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::11111111111:role/eksctl-staging-eks-129-no-NodeInstanceRole-MBHnnnbowvxB
      type: EC2_LINUX
    - principalARN: arn:aws:iam::11111111111:role/eksctl-staging-eks-129-no-NodeInstanceRole-uHc2uRD144gE
      type: EC2_LINUX

Groups and usernames cannot be set for the EC2_LINUX type.
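For anyone landing here after the fix (#7522) shipped: recent eksctl versions can create these entries from the config file with the accessentry subcommand. A sketch, assuming the cluster name matches the config:

# Create the entries declared in the config file, then list what exists
eksctl create accessentry -f staging-eks-129.yaml
eksctl get accessentry --cluster staging-eks-129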
