Support for AAD pod-managed identities #936
Comments
PR (#939) would add customized [...]
As far as I can see (device_perf_linux.go), this feature depends on querying the Azure API for data about disks and tweaking settings in /sys? The first part should use AAD Pod Identity, so it would work fine with [...]
The /csi/csi.sock unix socket mount in the node pod does seem to have issues with [...]. I am mostly focusing on the controller pod at the moment, since that has the largest attack surface.
I don't think it's the socket mount that is creating problems, but rather that azuredisk is not reaching the listening phase. It stops at: [...]
@scoof device_perf is not an essential feature (it is disabled by default now), so I think that's OK. You could go with [...]
I have the node pods working with [...]
device_perf fails because the devices in [...]
@scoof device_perf depends on IMDS, so it won't work if hostNetwork is disabled; that's expected. See azuredisk-csi-driver/pkg/optimization/skus.go, lines 120 to 131 at commit c8f7537.
As for how to set the cloud config, you could follow: https://github.com/kubernetes-sigs/azuredisk-csi-driver#prerequisite
But the whole point of this exercise is to use the Pod Identity IMDS instead of the node IMDS, since that will allow us to constrain the identities the pod is able to assume.
@scoof just disable hostNetwork, since the device_perf optimization feature is not required for now. Maybe we could leverage the Pod Identity IMDS to get node info in the future.
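For reference, a minimal sketch of the relevant part of the controller pod template with hostNetwork disabled; whether the chart exposes this as a value is an assumption.

```yaml
# Sketch: keep the controller pod on the pod network (hostNetwork: false, the
# Kubernetes default) so its IMDS requests can be intercepted by the
# aad-pod-identity NMI component instead of going straight to the node's IMDS.
spec:
  template:
    spec:
      hostNetwork: false
      # ...containers, volumes, etc. unchanged
```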
I think we're talking past each other. device_perf works fine with Pod Identity when configured in the same manner as the controller and when adding a hostPath volume mount for /sys/devices.
My problem is that there's no way to inject an azure.json that is specific to this particular deployment: the Secret method relies on a hard-coded name, which will clash with any other pods using the same method with the same name.
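For context, a sketch of the /sys/devices hostPath mount mentioned above; the volume name is illustrative.

```yaml
# Sketch: node pod template fragment exposing /sys/devices so device_perf can
# tune device settings without relying on the host network. "sys-devices" is an
# illustrative volume name.
spec:
  template:
    spec:
      containers:
        - name: azuredisk
          volumeMounts:
            - name: sys-devices
              mountPath: /sys/devices
      volumes:
        - name: sys-devices
          hostPath:
            path: /sys/devices
            type: Directory
```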
I think the latter is the better option, and I can provide a PR for that if you wish.
@scoof the secret name is hardcoded here: azuredisk-csi-driver/pkg/azuredisk/azure.go, lines 56 to 57 at commit c8f7537.
Do you think we should specify [...]? I could make the driver code change as well; it's not a big change.
@andyzhangx since we already have a way of pointing to an alternative azure.json, I think it would make more sense to just have the Helm chart add a secret, mount that secret, and point to it using AZURE_CREDENTIAL_FILE. That would allow the user to just specify the contents.
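A minimal sketch of that approach is below. The Secret name, mount path, and exact azure.json fields are assumptions; the thread only calls for tenant, subscription, and resource group, and AZURE_CREDENTIAL_FILE is the existing mechanism for pointing the driver at an alternative cloud config.

```yaml
# Sketch: a per-deployment cloud config delivered as a chart-managed Secret.
# Name, path, and field set are illustrative, not chart defaults.
apiVersion: v1
kind: Secret
metadata:
  name: azuredisk-cloud-config        # illustrative, deployment-specific name
  namespace: kube-system
stringData:
  azure.json: |
    {
      "tenantId": "<tenant-id>",
      "subscriptionId": "<subscription-id>",
      "resourceGroup": "<node-resource-group>",
      "useManagedIdentityExtension": true
    }
```

```yaml
# Controller pod template fragment: mount the Secret and tell the driver where it is.
spec:
  template:
    spec:
      containers:
        - name: azuredisk
          env:
            - name: AZURE_CREDENTIAL_FILE
              value: /etc/azure/azure.json
          volumeMounts:
            - name: cloud-config
              mountPath: /etc/azure
              readOnly: true
      volumes:
        - name: cloud-config
          secret:
            secretName: azuredisk-cloud-config
```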
@scoof those are two options: 1) let the driver read from different secrets; 2) project different secrets onto the driver's cloud config path.
I have no preference for one over the other.
Is your feature request related to a problem?/Why is this needed
I thought I had support for pod-managed identities by providing a way to label the pods, but as I dug deeper, I realized it wasn't enough.
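For context, aad-pod-identity selects which identity a pod may use through a label on the pod; here is a sketch with an illustrative binding selector value.

```yaml
# Sketch: the aadpodidbinding label on the pod template ties the pod to an
# AzureIdentityBinding selector. The value "azuredisk-csi" is illustrative.
spec:
  template:
    metadata:
      labels:
        aadpodidbinding: azuredisk-csi
```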
Describe the solution you'd like in detail
There are currently two things that need to be changed to fully support pod-managed identities: disabling hostNetwork on the driver pods, and injecting a deployment-specific azure.json.
I have this working by mounting a manually configured ConfigMap in the pod and using the azure-cred-file ConfigMap to point to it.
Disabling hostNetwork is easy enough, but injecting an azure.json with at least the tenant, subscription, and resource group is a bit harder, since we need to pick those up from somewhere. I don't know if the only option is to provide them as values to the Helm chart and render the azure.json file manually.
This should also remove the requirement to mount any hostPath volumes in the pod.
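A rough sketch of the manually configured workaround described above, plus hypothetical chart values: the azureCloudConfig value names are assumptions that do not exist in the chart today, and the azure-cred-file "path" key follows the prerequisite doc linked earlier (worth double-checking there).

```yaml
# Hypothetical chart values carrying the minimum fields needed to render azure.json.
azureCloudConfig:
  tenantId: "<tenant-id>"
  subscriptionId: "<subscription-id>"
  resourceGroup: "<node-resource-group>"
```

```yaml
# The azure-cred-file pointer from the workaround above: it tells the driver where
# the manually mounted azure.json lives inside the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: azure-cred-file
  namespace: kube-system
data:
  path: /etc/azure/azure.json   # assumes the azure.json ConfigMap is mounted at /etc/azure
```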
Describe alternatives you've considered
The alternative is to keep using the node-managed identity, which would be the less secure option, since it would delegate too many rights to the CSI driver.