Roles with paths do not work when the path is included in their ARN in the aws-auth configmap #268
Comments
Running into the same issue here on EKS.
Ahh... this explains our issue when testing with AWS SSO-created roles too. See the issue referenced in this document; this has been a problem for quite a while (at least 14 months). Pertinent passage: "When we stumbled across this I assumed it was something about the SSO role, but based on this issue it's probably the path."
We don't use EKS, but have had this issue with 1.12 and 1.14.6 with aws-iam-authenticator. If you edit the configmap to remove the path from the role ARN, authentication works. My co-worker and I suspect that is because of the way STS reports assumed-role identities. We have a role with a path in its ARN; if you assume the role and run `aws sts get-caller-identity`, the ARN that comes back omits the path, so it never matches a configmap entry that includes one.
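For illustration, a minimal sketch of the mismatch (the account ID and role names here are hypothetical):

```bash
# The role's IAM ARN includes the path:
aws iam get-role --role-name gitlab-runner --query 'Role.Arn' --output text
#=> arn:aws:iam::111122223333:role/gitlab-ci/gitlab-runner

# But with credentials obtained by assuming that role, the assumed-role ARN drops it:
aws sts get-caller-identity --query 'Arn' --output text
#=> arn:aws:sts::111122223333:assumed-role/gitlab-runner/<session-name>
```

The configmap entry carries the path while the identity the authenticator sees does not, so the lookup never matches.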
I wish this was fixed. As of now, I'm not sure what to do other than creating a role with a shortened path and switching to it. I suppose one can also just edit the role ARN that gets put into the configmap itself.
Yeah, removing the path is how I identified it as the cause of the issue. The field name is `rolearn`, which is misleading given that the full ARN (path included) is not what actually gets matched. I opened this so others running into the issue might find it, and also because I think something needs to address it, whether it's documentation (though I don't think docs are sufficient without changing the name of the field in the configmap) or a bugfix.
We just discovered the same by comparing the role that the Pod uses (containing a path) vs. the one that is set in the token (path missing). For now our workaround is also adding a role mapping for an IAM role ARN without the path, i.e. one that "doesn't actually exist".
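A sketch of that workaround (all names and the account ID are hypothetical): map the path-stripped ARN, which is the form the authenticator actually compares against, even though no role exists at that exact ARN:

```bash
kubectl -n kube-system edit configmap aws-auth
# then add an entry like this under data.mapRoles, with the path removed
# from the real ARN arn:aws:iam::111122223333:role/gitlab-ci/gitlab-runner:
#
#   - rolearn: arn:aws:iam::111122223333:role/gitlab-runner
#     username: gitlab-runner
#     groups:
#       - system:masters
```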
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I was able to reproduce this issue. I created two roles:

```bash
aws iam create-role \
  --role-name K8s-Admin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::<account id>:root"},"Action":"sts:AssumeRole","Condition":{}}]}' \
  --output text \
  --query 'Role.Arn'

aws iam create-role \
  --role-name K8s-Admin-WithPath \
  --path "/kubernetes/" \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::<account id>:root"},"Action":"sts:AssumeRole","Condition":{}}]}' \
  --output text \
  --query 'Role.Arn'
```

Mapped them to the cluster with:

```bash
eksctl create iamidentitymapping --cluster basic-demo --arn arn:aws:iam::<account id>:role/K8s-Admin --group system:masters --username iam-admin
eksctl create iamidentitymapping --cluster basic-demo --arn arn:aws:iam::<account id>:role/kubernetes/K8s-Admin-WithPath --group system:masters --username iam-admin-withpath
```

Then wrote a kubeconfig using the AWS profile for the role without a path:

```bash
eksctl utils write-kubeconfig --cluster=basic-demo --profile=sandbox-k8s-admin --set-kubeconfig-context --region=us-east-2
kubectl get nodes
# returned the list of nodes, as expected
```

Then switched over to the role with the path:

```bash
eksctl utils write-kubeconfig --cluster=basic-demo --profile=sandbox-k8s-admin-withpath --set-kubeconfig-context --region=us-east-2
kubectl get nodes
# error: You must be logged in to the server (Unauthorized)
```
Any news on this? This is quite weird behavior and hard to detect as an error.
We are seeing this issue as well; any word on a resolution?
+1
I've enjoyed my 6+ hours lost to this.
A Terraform workaround:

```
join("/", values(regex("(?P<prefix>arn:aws:iam::[0-9]+:role)/[^/]+/(?P<role>.*)", <role-arn>)))
```

I'm not sure this is still needed with more recent releases.
This was a very easy work-around for us, thank you.
Any update? It seems this is still an issue.
Hello, I'm having the same issue.
This caught me too today; what a pain indeed. I can confirm that an instance role with a path will not be able to authenticate against the cluster; hopefully this gets fixed soon.
Adding this in the hope it saves someone else a few hours of their life.
A fix could be to have the authenticator resolve the role via the IAM GetRole API, which returns the ARN including its path:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/get-role.html
I could create a sample PR if that helps.
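For reference, a sketch of what that lookup returns (role name taken from the reproduction above); GetRole reports both the path and the full ARN:

```bash
aws iam get-role --role-name K8s-Admin-WithPath \
  --query '{Path: Role.Path, Arn: Role.Arn}' --output json
# {
#     "Path": "/kubernetes/",
#     "Arn": "arn:aws:iam::<account id>:role/kubernetes/K8s-Admin-WithPath"
# }
```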
/remove-lifecycle stale
This didn't work for us on ARNs that contain nested "directories" in the path. For those we used:

```
replace(<role-arn>, "//.*//", "/")
```

(Terraform treats a substring wrapped in forward slashes as a regex, so this collapses everything between the first and last slash into a single slash, leaving just the role name.)
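Outside Terraform, an equivalent shell sketch of the same stripping (the ARN below is hypothetical):

```bash
arn='arn:aws:iam::111122223333:role/foo/bar/my-role'
# Collapse everything between ":role/" and the final segment:
echo "${arn}" | sed -E 's#(:role)/.+/#\1/#'
#=> arn:aws:iam::111122223333:role/my-role
```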
Excuse me, can you show me what `username: gitlab-admin` refers to? Thanks.
Same problem here... thank you very much.
@nckturner, as you added the tag "important-soon" more than 2 years ago, what is the reason this issue is still present? See:

aws-iam-authenticator/pkg/arn/arn.go, line 43 at commit 85e5098

If using paths in IAM is a "bad practice" it should be said; but if not, this bug can be a real blocker if you have two roles with the same name in different paths. It also makes any automation very tricky.
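To make that blocker concrete, a hypothetical sketch (trust.json stands in for a real trust policy): two roles that differ only by path are legal in IAM but become indistinguishable once the path is dropped:

```bash
# Same role name under two different paths -- both are valid IAM roles.
aws iam create-role --role-name deployer --path /team-a/ \
  --assume-role-policy-document file://trust.json
aws iam create-role --role-name deployer --path /team-b/ \
  --assume-role-policy-document file://trust.json

# With the path stripped, both canonicalize to the same ARN:
#   arn:aws:iam::<account id>:role/deployer
# so an aws-auth mapping cannot tell them apart.
```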
This is an important bug to fix. However, so far no contributor has provided a fix that has been merged. Anyone who is willing to follow the Kubernetes code of conduct is welcome to work on this. Related to that: if you (that is, anyone reading this) would like this bug fixed and are willing to offer a bounty, that offer might help move things forward. If people want to highlight this issue to the vendor, AWS, please visit aws/containers-roadmap#573 and add a thumbs-up reaction.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
My team just lost a few hours to this issue today. It'd be great to see it resolved.
Same thing happened to my team today...
/lifecycle frozen
Any update? Is using paths in IAM a "bad practice" or not?
Lost two days on this; it probably should be fixed.
The following PR was merged and appears to address the problem: #670, though it's unclear to me what the current effective status is, as I don't see any documentation updated as part of the pull request.
Looks like it was merged, but there has not been a release since then.
This change is live with EKS Access Entries, but the authenticator is currently still not looking at paths on roles in the aws-auth config map.
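For anyone on a recent EKS platform version, a sketch of the Access Entries route (cluster name, account ID, and role are hypothetical); it accepts the full principal ARN, path included, and bypasses aws-auth entirely:

```bash
aws eks create-access-entry \
  --cluster-name basic-demo \
  --principal-arn arn:aws:iam::111122223333:role/kubernetes/K8s-Admin-WithPath \
  --type STANDARD \
  --username iam-admin-withpath

# Grant cluster-admin through an access policy association:
aws eks associate-access-policy \
  --cluster-name basic-demo \
  --principal-arn arn:aws:iam::111122223333:role/kubernetes/K8s-Admin-WithPath \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```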
I have a role with an ARN that looks like this: `arn:aws:iam::XXXXXXXXXXXX:role/gitlab-ci/gitlab-runner`. My aws-auth configmap mapped that full ARN. I repeatedly got unauthorized errors from the cluster until I updated the `rolearn` to `arn:aws:iam::XXXXXXXXXXXX:role/gitlab-runner`; after that change my access worked as expected.

If it makes a difference, I'm using assume-role on our gitlab-runner, and using `aws eks update-kubeconfig --region=us-east-1 --name=my-cluster` to get kubectl configured.