IAM role configuration #164
Comments
The annotation example provided is for use with kube2iam, an excellent piece of tooling; you should definitely check it out.
If that's the case then it needs to be better documented. The current examples make it sound like it'll "just work".
To use the Secrets Manager or Systems Manager backends you have to provide the pod with AWS credentials in some way. How users choose to do that varies; I'd assume that if you already have a cluster in place you have probably set this up somehow. If you are just starting, here are some options:

- Use tooling like kube2iam or kiam, which both use the annotation shown in the example helm call.
- Grant your nodes direct access to your secrets. Sure, it works, but I wouldn't recommend it 😄
- If you want to further control how things are accessed, you could do a setup where the credentials provided to the external-secrets pod only have access to assuming other roles.
- If you are using IRSA you might have to use the workaround mentioned here: #161 (comment)

PRs for updated docs are most welcome! :) Personally I moved from kube2iam to kiam, but I would definitely look at IRSA if you are just starting out on AWS.
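For the kube2iam/kiam route, the wiring is just a pod annotation on the external-secrets deployment. A minimal values sketch, assuming the chart exposes a `podAnnotations` value (verify against your chart's values.yaml) and using a placeholder role name:

```yaml
# values.yaml fragment for the external-secrets chart (key name assumed).
# kube2iam/kiam intercept the pod's calls to the EC2 metadata endpoint
# and hand back temporary credentials for the annotated role instead of
# the node's instance role.
podAnnotations:
  iam.amazonaws.com/role: Name-Of-IAM-Role-With-SecretManager-Access
```

Note that the annotated role's trust policy must allow the node instance role (or kiam's server role) to assume it, otherwise the assume-role call made on the pod's behalf will fail.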
Thanks @Flydiverny! Will check these options out.
I have a question about this snippet from the readme:

Does the above assume that the role you are referencing via `Name-Of-IAM-Role-With-SecretManager-Access` is applied directly to the k8s nodes of the cluster? Just creating an IAM role with only SecretsManagerReadWrite access and applying it to the helm install via the pod annotation isn't sufficient to grant read/write access to any secrets.

If the above code assumes that the ARN you're supplying is attached directly to the k8s nodegroup itself, then isn't this a security vulnerability? If the full read/write IAM policy is attached directly to the k8s nodes, then anyone running any pod in the cluster can read and write secrets automatically, via the temporary credentials they can fetch from the AWS metadata server at http://169.254.169.254/
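One way to reduce the blast radius described above is to skip the broad `SecretsManagerReadWrite` managed policy entirely and attach a narrowly scoped, read-only policy to the role the pod assumes. A sketch (the account ID, region, and secret path prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlySpecificSecrets",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:myapp/*"
    }
  ]
}
```

Combined with kube2iam/kiam (or IRSA), only pods annotated with (or bound to) this role can read those secrets; other pods on the same node cannot, because the node role itself then carries no Secrets Manager permissions.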