Support for separate users in same client #159

Open · Herrmlo opened this issue Nov 22, 2022 · 3 comments

Comments

Herrmlo commented Nov 22, 2022

Scenario:
I have two clusters in the same tenant, in different subscriptions.
I have a separate account for each cluster, with no permissions on the other cluster.
I get the credentials via az aks get-credentials and then use either device code flow or Azure CLI token login; both end in the same issue.

Issue:
The account for whichever cluster I add first is used for all clusters. This works fine for the first cluster, but switching to the other cluster results in:
Error from server (Forbidden): pods is forbidden: User "xxx" cannot list resource "pods" in API group "" in the namespace "default": User does not have access to the resource in Azure. Update role assignment to allow access.
Whether the configurations reside in the same kubeconfig or in different ones does not change the behavior.
There is always only one AAD token file in ~/.kube/cache/kubelogin.

By executing kubelogin remove-tokens I can reset this and reauthenticate with the correct user, but I would expect there to be a seamless solution?

Example kubeconfig for reference; since the exec parts are completely identical, this might be the issue:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: dummy
    server: https://aks-westeurope-dev-123123.hcp.westeurope.azmk8s.io:443
  name: aks-westeurope-dev
- cluster:
    certificate-authority-data: dummy
    server: https://aks-westeurope-prod-123123.hcp.westeurope.azmk8s.io:443
  name: aks-westeurope-prod
contexts:
- context:
    cluster: aks-westeurope-dev
    user: clusterUser_rg-dev-aks-aks-westeurope-dev
  name: aks-westeurope-dev
- context:
    cluster: aks-westeurope-prod
    user: clusterUser_rg-prod-aks-aks-westeurope-prod
  name: aks-westeurope-prod
current-context: aks-westeurope-dev
kind: Config
preferences: {}
users:
- name: clusterUser_rg-dev-aks-aks-westeurope-dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --login
      - azurecli
      - --server-id
      - 7de93456-fca8-44fa-8bf8-650d7e77c2e6
      command: kubelogin
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: clusterUser_rg-prod-aks-aks-westeurope-prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --login
      - azurecli
      - --server-id
      - 7de93456-fca8-44fa-8bf8-650d7e77c2e6
      command: kubelogin
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false

How do I work around this?

weinong (Contributor) commented Nov 23, 2022

First of all, we removed the token-caching behavior from azurecli login in v0.0.21 to address #137.

I think the root of the issue is that you are sharing multiple users in the same environment. The CLI, whether az or kubelogin, needs a hint (az logout or kubelogin remove-tokens) in between these different users. I don't see how a more seamless integration could be achieved.

Herrmlo (Author) commented Nov 23, 2022

Since I can be logged in to the az CLI with multiple users, I had hoped that kubelogin could either notice which user has permissions (it seemed to behave this way in older az CLI versions, before I used kubelogin: as long as I was logged in with both users, I had access to both clusters) or give me a way to define which user should be used.

m-v-k commented Mar 7, 2023

I'm also in need of multiple-user support, and I want to switch between users all the time.
I normally just use multiple contexts in my kubeconfig, but with kubelogin for auth this doesn't seem to work: it uses the same (cached) token for all contexts. I think this is because the cached token is matched by a hash of the command arguments.

Workaround -> set the --token-cache-dir argument to a different folder for each user (for example, subfolders of the default .../kubelogin).
Then create multiple contexts, each with its own unique user.
Now you can authenticate with multiple users against the same k8s cluster & OIDC client, and switch user by switching context as normal (see the sketch below).
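
For illustration, a minimal sketch of this workaround applied to the kubeconfig above, assuming the device code login flow (the client/tenant IDs and cache paths are placeholders; only the --token-cache-dir values differ between the two users):

users:
- name: clusterUser_rg-dev-aks-aks-westeurope-dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
      - get-token
      - --login
      - devicecode
      - --server-id
      - 7de93456-fca8-44fa-8bf8-650d7e77c2e6
      - --client-id
      - <kubelogin-client-id>   # placeholder
      - --tenant-id
      - <tenant-id>             # placeholder
      # per-user cache folder, so the dev token is never reused for prod
      - --token-cache-dir
      - /home/me/.kube/cache/kubelogin/dev
- name: clusterUser_rg-prod-aks-aks-westeurope-prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
      - get-token
      - --login
      - devicecode
      - --server-id
      - 7de93456-fca8-44fa-8bf8-650d7e77c2e6
      - --client-id
      - <kubelogin-client-id>   # placeholder
      - --tenant-id
      - <tenant-id>             # placeholder
      # separate cache folder for the prod user
      - --token-cache-dir
      - /home/me/.kube/cache/kubelogin/prod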

Suggestion -> add support for an argument that passes extra OIDC parameters; that way we could add login_hint or domain_hint, for example.
There is an example in the other kubelogin tool, --oidc-auth-request-extra-params: https://github.com/int128/kubelogin/blob/master/docs/usage.md
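
For reference, a rough sketch of how that analogous flag is used in int128/kubelogin, per the usage doc linked above (the issuer URL, client ID, and hint value are placeholders):

users:
- name: oidc-user-alice
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://issuer.example.com   # placeholder
      - --oidc-client-id=<client-id>                   # placeholder
      # extra OIDC parameter: pre-select the account for this kubeconfig user
      - --oidc-auth-request-extra-params=login_hint=alice@example.com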
