
AKS using system assigned identities on VMSS to access other Azure resources #1704

Closed
jmos5156 opened this issue Jul 2, 2020 · 12 comments

jmos5156 commented Jul 2, 2020

Hello.

When deploying an AKS cluster in private mode, I see that three managed identities are created in my case:

  1. MI for the kube dashboard
  2. MI for the OMS agent
  3. MI for the agent pool

I cannot find any documentation on the usage of these MIs. Is there anything available that describes what the AKS cluster deployment does with them when adding profiles/nodes to a deployment?

Further to this, we want to grant access to Key Vault from the pods. Under normal IaaS processes, we add a system-assigned identity to the Azure VMSS, and this is enough to grant implicit access to the pods running on the worker nodes. Is this the correct way of granting this level of permission? If so, how can we enable the system-assigned identity at deploy time? There doesn't seem to be anything directly related in the ARM resource Microsoft.ContainerService/managedClusters.
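For reference, this is the IaaS-style workflow we follow today, sketched with the Azure CLI (the resource group, cluster, and VMSS names below are placeholders for our environment):

```bash
# Enable a system-assigned identity directly on the node VMSS
# (note: the VMSS lives in the node resource group, MC_*, not the
# cluster's own resource group; names here are placeholders).
az vmss identity assign \
  --resource-group MC_myRG_myAKS_westeurope \
  --name aks-nodepool1-12345678-vmss

# At cluster deploy time, the documented route is a managed identity
# on the cluster itself rather than on the VMSS:
az aks create \
  --resource-group myRG \
  --name myAKS \
  --enable-managed-identity
```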

On deploying the AKS cluster, a managed identity is created (see point 3 above), yet granting this identity access to other Azure resources does not seem to work for the pods, hence using the system-assigned ones on the VMSSs themselves.

Many Thanks

aristosvo commented Jul 2, 2020

https://github.com/Azure/aad-pod-identity is, AFAIK, the solution for explicitly granting access to resources. You can assign different managed identities to different (groups of) pods by labelling them.

See also #1480 for inclusion as an AKS add-on
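Roughly, the moving parts look like this (a minimal sketch; the identity name, resource ID, and client ID are placeholders): an AzureIdentity wraps a managed identity, an AzureIdentityBinding ties it to a label selector, and any pod labelled `aadpodidbinding: <selector>` gets tokens for that identity.

```bash
kubectl apply -f - <<'EOF'
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: kv-reader
spec:
  type: 0          # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/kv-reader
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: kv-reader-binding
spec:
  azureIdentity: kv-reader
  selector: kv-reader   # pods labelled aadpodidbinding: kv-reader
EOF
```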

jmos5156 commented Jul 2, 2020

Yes, I agree this seems to be the method currently employed, but I have also seen that, without deploying AAD Pod Identity, enabling a system-assigned identity (SI) at the VMSS level also allows the pods to gain access to other Azure resources where the SI is granted permissions (e.g. Key Vault).

It also seems that, in the latest API version of Microsoft.ContainerService/managedClusters, the resource has several service principal configurations, which makes for a confusing setup.

We have servicePrincipalProfile and identity listed in the documentation, but limited information about what they are and how they affect cluster creation. When the cluster is created there is an agentpool MI, but it doesn't seem to be assigned to anything, so what should the method be: VMSS SI or AAD Pod Identity?
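For what it's worth, you can see what a deployed cluster actually got by querying those fields directly (cluster and group names below are placeholders):

```bash
az aks show -g myRG -n myAKS --query identity                 # cluster (control-plane) identity - "identity" in ARM
az aks show -g myRG -n myAKS --query identityProfile          # kubelet identity, i.e. the "agentpool" MI
az aks show -g myRG -n myAKS --query servicePrincipalProfile  # legacy SP path; shows "msi" on MI clusters
```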

The only benefit of AAD Pod Identity is the granularity of the permissions, which I guess is the real difference.

@adhodgson1

I have developers unintentionally going down the same path as @jmos5156, where they grant the identity at the VMSS level access to external resources and this works for every pod on the cluster. It would be good to know the future of this workaround so I can push people in the right direction.
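The reason it works for every pod, as far as I understand it, is that any pod on the node network can reach the Instance Metadata Service and request a token for the VMSS identity (standard IMDS endpoint and API version shown; aad-pod-identity's NMI component exists precisely to intercept this traffic):

```bash
# From inside any pod on the node:
curl -s -H 'Metadata: true' \
  'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net'
```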

jmcshane commented Jul 7, 2020

The documentation for the managed identities can be found here: https://docs.microsoft.com/en-us/azure/aks/use-managed-identity

jmos5156 commented Jul 9, 2020

I think I've hit another problem. When assigning the identity at the VMSS level, a Kubernetes upgrade will recreate the node(s), and this clears the identity configuration from the scale set. That's probably a 'hidden' reason for not using the SI approach and referring back to the AAD Pod Identity solution.

@github-actions

Action required from @Azure/aks-pm

@TomGeske

@jmos5156: For KV access you should consider using the kubelet user-assigned MI or pod identity.

The removal of custom-added MIs is a known issue. We are working on fixing it.

#1650
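For the first option, the grant would look roughly like this (a sketch; the cluster and vault names are placeholders):

```bash
# Look up the kubelet (agentpool) identity and grant it access to Key Vault.
KUBELET_OBJECT_ID=$(az aks show -g myRG -n myAKS \
  --query identityProfile.kubeletidentity.objectId -o tsv)
az keyvault set-policy --name myKeyVault \
  --object-id "$KUBELET_OBJECT_ID" \
  --secret-permissions get list
```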

@jmos5156

Hello,
Pod Identity does not yet work on Windows nodes.
Which user is assigned to the kubelet? Is it the one named agentpool? If so, granting this user access to KV does not work.

@TomGeske

Okay, understood.

The one called agentpool should work. Are you using FlexVol or the Secrets Store CSI driver to access KV?

Please share your configuration.
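For context, a Secrets Store CSI driver setup would involve a SecretProviderClass along these lines (a minimal sketch; vault, tenant, and object names are placeholders, and the API version reflects the driver's alpha schema at the time):

```bash
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: kv-secrets
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"   # "false" falls back to the node/VMSS identity
    keyvaultName: myKeyVault
    objects: |
      array:
        - |
          objectName: my-secret
          objectType: secret
    tenantId: <tenant-id>
EOF
```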

TomGeske self-assigned this Jul 24, 2020
ghost added the action-required label Aug 18, 2020
ghost added the stale label Oct 17, 2020
ghost commented Oct 17, 2020

This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.

ghost removed the stale and action-required labels Oct 26, 2020
ghost added the action-required label Nov 21, 2020
ghost added the stale label Jan 20, 2021
ghost commented Jan 20, 2021

This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.

ghost closed this as completed Feb 4, 2021
ghost commented Feb 4, 2021

This issue will now be closed because it hasn't had any activity for 15 days after being marked stale. @jmos5156, feel free to comment again within the next 7 days to reopen it, or open a new issue after that time if you still have a question/issue or suggestion.

ghost locked as resolved and limited conversation to collaborators Mar 6, 2021