This repository has been archived by the owner on Jul 26, 2022. It is now read-only.

IAM role configuration #164

Closed

res0nat0r opened this issue Sep 18, 2019 · 4 comments

Comments

@res0nat0r

res0nat0r commented Sep 18, 2019

I have a question about the snippet below from the readme:

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

```bash
helm install --name kubernetes-external-secrets \
  --set env.POLLER_INTERVAL_MILLISECONDS='300000' \
  --set podAnnotations."iam\.amazonaws\.com/role"='Name-Of-IAM-Role-With-SecretManager-Access' \
  charts/kubernetes-external-secrets
```

Use IAM credentials for Secrets Manager access
If not running on EKS you will have to use an IAM user (in lieu of a role).

Does the above assume that the role you are referencing via Name-Of-IAM-Role-With-SecretManager-Access is applied directly to the k8s nodes of the cluster? Just creating an IAM role with only SecretsManagerReadWrite access and applying it to the helm install via the pod annotation isn't sufficient to grant read/write access to any secrets.

If the above example assumes the ARN you're supplying is attached directly to the k8s node group itself, then isn't this a security vulnerability? If the full read/write IAM policy is attached directly to the k8s nodes, then anyone running any pod in the cluster can read and write secrets automatically via the temporary credentials they can get from the AWS metadata server at http://169.254.169.254/
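
For illustration, a rough sketch of the exposure being described, assuming IMDSv1 is reachable from pods (no kube2iam/kiam metadata proxy, iptables block, or hop-limit restriction in place). The role name is the placeholder from the helm example:

```bash
# Sketch only: run from any pod on a node whose instance profile carries the secrets policy.
# Assumes the instance metadata service (IMDSv1) is open to pod traffic.

# List the IAM role attached to the node's instance profile
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch temporary credentials for that role (placeholder role name)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/Name-Of-IAM-Role-With-SecretManager-Access
```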

@reubenavery

The annotation example provided is for use with kube2iam, an excellent piece of tooling; you should definitely check it out.
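
For context, kube2iam intercepts metadata calls from pods and assumes the role named in the iam.amazonaws.com/role pod annotation on the pod's behalf, so the target role must trust the role attached to the node instance profile. A hedged sketch, with placeholder account ID and role names:

```bash
# Sketch only: placeholder account ID, node-role name, and target-role name.
# kube2iam assumes the annotated role using the node's credentials, so the target
# role's trust policy must allow the node instance-profile role as a principal.
aws iam create-role \
  --role-name Name-Of-IAM-Role-With-SecretManager-Access \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/k8s-node-instance-role" },
      "Action": "sts:AssumeRole"
    }]
  }'
```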

@mbonig

mbonig commented Oct 10, 2019

The annotation example provided is for use with kube2iam, an excellent piece of tooling; you should definitely check it out.

If that's the case, then it needs to be better documented. The current examples make it sound like it'll 'just work'.

@Flydiverny
Member

Flydiverny commented Nov 8, 2019

To use the Secrets Manager or Systems Manager backends you have to provide the pod with AWS credentials in some way. How users choose to do that varies; I'd assume that if you already have a cluster in place you have probably set this up somehow. If you are just starting, here are some examples:

  • IRSA
  • kiam or kube2iam
  • Providing AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY to the pod (see the sketch after this list)
  • node access
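
To illustrate the static-credentials option above: the AWS SDK picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment, and the README snippet quoted earlier suggests the chart passes the env map through to the pod. A hedged sketch only, with placeholder values; whether you pass them via --set, a values file, or a mounted Kubernetes Secret depends on your chart version and your tolerance for static credentials:

```bash
# Sketch only: static IAM-user credentials passed as env vars via the chart's env map.
# Placeholder values; prefer a Kubernetes Secret or an IAM role over inlining keys.
helm install --name kubernetes-external-secrets \
  --set env.AWS_REGION='us-east-1' \
  --set env.AWS_ACCESS_KEY_ID='AKIA...' \
  --set env.AWS_SECRET_ACCESS_KEY='...' \
  charts/kubernetes-external-secrets
```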

So there are many options for providing AWS credentials; using tools like kube2iam or kiam (which both use the iam.amazonaws.com/role annotation from the example helm call) is certainly one way.

You could also grant your nodes direct access to your secrets, sure, I wouldn't recommend it 😄

If you want to further control how things are accessed, you could do a setup where the credentials provided to the external-secrets pod only allow assuming other roles.
With this you could then set iam.amazonaws.com/permitted (the annotation name is configurable) to allow different roles to be assumed in different namespaces.
Then, by setting iamRole in your external secret spec, you can tell the controller to assume this role, if permitted by the namespace, before fetching the secret.
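
A hedged sketch of that layout follows. The iam.amazonaws.com/permitted key is kiam's default namespace restriction annotation; the per-secret role field appears as roleArn in the chart docs I've seen (the comment above calls it iamRole), so check the CRD shipped with your chart version. The namespace, secret names, and account ID are placeholders:

```bash
# Sketch only: restrict which roles pods in this namespace may assume (kiam default
# annotation, configurable), then ask the controller to assume a role per secret.
kubectl annotate namespace my-app \
  'iam.amazonaws.com/permitted=Name-Of-IAM-Role-With-SecretManager-Access'

cat <<'EOF' | kubectl apply -f -
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-app-db-credentials
  namespace: my-app
spec:
  backendType: secretsManager
  # Field name may be roleArn or iamRole depending on chart version -- check your CRD
  roleArn: arn:aws:iam::111122223333:role/Name-Of-IAM-Role-With-SecretManager-Access
  data:
    - key: my-app/db-password
      name: password
EOF
```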

If you are using IRSA you might have to use the workaround mentioned here #161 (comment)

PRs for updated docs are most welcome! :)

Personally moved to kiam from kube2iam, but I would definitely look at IRSA if you are just starting up on AWS.
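
For the IRSA route, a hedged sketch, assuming the chart exposes serviceAccount.annotations in its values (check your chart version) and that an IAM role with an OIDC trust policy for the cluster's service account already exists; the role ARN is a placeholder:

```bash
# Sketch only: annotate the controller's service account with an IRSA role.
# Assumes the chart supports serviceAccount.annotations and the role already
# trusts the cluster's OIDC provider for this service account.
helm install --name kubernetes-external-secrets \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='arn:aws:iam::111122223333:role/Name-Of-IAM-Role-With-SecretManager-Access' \
  charts/kubernetes-external-secrets
```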

@res0nat0r
Author

Thanks @Flydiverny! Will check these options out.
