
oc project does not work with cluster proxy #647

Closed
wanghaoran1988 opened this issue Nov 20, 2020 · 10 comments

Comments

@wanghaoran1988
Member

We are using a proxy as the apiserver that can reverse-proxy to multiple clusters, so we have a config like:

apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8001/backplane/cluster/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/
  name: 1h2cu64jqeufe3dnk7gvpl7j3ui3msut
contexts:
- context:
    cluster: 1h2cu64jqeufe3dnk7gvpl7j3ui3msut
    namespace: openshift-logging
    user: haoran.openshift
  name: default/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/haoran.openshift
current-context: default/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/haoran.openshift
kind: Config
preferences: {}
users:
- name: haoran.openshift
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - /Users/haowang/.kube/ocm-token
      command: bash
      env: null

When we perform oc project, it creates a new context, cluster (server), and user entry as follows:

apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8001/backplane/cluster/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/
  name: 1h2cu64jqeufe3dnk7gvpl7j3ui3msut
- cluster:
    server: http://localhost:8001/backplane/cluster/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/
  name: localhost:8001
contexts:
- context:
    cluster: 1h2cu64jqeufe3dnk7gvpl7j3ui3msut
    namespace: openshift-monitoring
    user: haoran.openshift
  name: default/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/haoran.openshift
- context:
    cluster: localhost:8001
    namespace: default
    user: system:serviceaccount:openshift-backplane-srep:89bb76f8aa82917707763cfb8c4a01a5/localhost:8001
  name: default/localhost:8001/system:serviceaccount:openshift-backplane-srep:89bb76f8aa82917707763cfb8c4a01a5
current-context: default/localhost:8001/system:serviceaccount:openshift-backplane-srep:89bb76f8aa82917707763cfb8c4a01a5
kind: Config
preferences: {}
users:
- name: haoran.openshift
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - /Users/haowang/.kube/ocm-token
      command: bash
      env: null
- name: system:serviceaccount:openshift-backplane-srep:89bb76f8aa82917707763cfb8c4a01a5/localhost:8001
  user: {}

oc project should respect the current context and only change the namespace of the current context.

@feichashao
Contributor

feichashao commented Jan 11, 2021

The current oc project <project-name> will always generate a new cluster and context based on the RESTConfig, and fetch the username from whoami. Link.

if len(userNameInUse) == 0 {  // <--- Always true here.
	user, err := project.WhoAmI(o.RESTConfig)
	if err != nil {
		return fmt.Errorf("unable to default to a user name: %v", err)
	}
	userNameInUse = user.Name
}

kubeconfig, err := cliconfig.CreateConfig(projectName, userNameInUse, o.RESTConfig)

CreateConfig generates the cluster name from host:port with getClusterNicknameFromConfig, which doesn't respect the cluster name set in the kubeconfig. Also, CreateConfig only copies the BearerToken from the RESTConfig and drops other info like the ExecProvider. Link.
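For illustration, here is a rough approximation of that host:port nickname behaviour (not the actual oc source; clusterNickname is a made-up helper, inferred from the resulting name localhost:8001 above):

package main

import (
	"fmt"
	"net/url"
)

// clusterNickname approximates deriving a nickname from the server URL:
// only the host:port is kept and the path is discarded, which is why the
// proxied cluster above collapses to the name "localhost:8001" even though
// the path is what distinguishes clusters behind the proxy.
func clusterNickname(server string) (string, error) {
	u, err := url.Parse(server)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}

func main() {
	name, _ := clusterNickname("http://localhost:8001/backplane/cluster/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/")
	fmt.Println(name) // prints: localhost:8001
}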

This works fine if the kubeconfig was generated by oc login. But if the kubeconfig was pre-generated by other tools or was made manually by the user, information could be lost.

To make it work for both cases, we can just reuse the info from the current context and change the namespace to the requested project name. If there's no current context, no cluster info, or no authinfo (how could that happen?), we can then generate a new one from CreateConfig.
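A minimal sketch of that idea using client-go's clientcmd package (the path and project name are placeholders, and the real change would live inside oc's project command rather than a standalone program):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := clientcmd.RecommendedHomeFile // typically ~/.kube/config
	newNamespace := "openshift-monitoring"          // the project being switched to

	// Load the kubeconfig exactly as it is on disk.
	config, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	ctx, ok := config.Contexts[config.CurrentContext]
	if !ok {
		// No usable current context: this is where falling back to
		// CreateConfig, as oc does today, would still make sense.
		fmt.Fprintln(os.Stderr, "no current context; fall back to CreateConfig")
		os.Exit(1)
	}

	// Change only the namespace; the cluster entry and the user entry
	// (including any exec credential plugin) are left untouched.
	ctx.Namespace = newNamespace

	if err := clientcmd.WriteToFile(*config, kubeconfigPath); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}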

Not sure if it would break anything.

Update:
Users may pass --context, --cluster, or --user options to oc, so the source of truth should be the RESTConfig instead of the currentContext.
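For context, this is roughly how such flags feed into the client configuration through client-go (a sketch, not oc's actual wiring; the override values simply reuse names from the kubeconfig above):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()

	// Flags like --context and --namespace end up as overrides layered on
	// top of whatever the kubeconfig file says.
	overrides := &clientcmd.ConfigOverrides{
		CurrentContext: "default/1h2cu64jqeufe3dnk7gvpl7j3ui3msut/haoran.openshift",
		Context:        clientcmdapi.Context{Namespace: "openshift-logging"},
	}

	clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides)

	// The REST config is the merged result of the file plus the overrides,
	// which is why it, rather than the file's currentContext alone, is the
	// source of truth.
	restConfig, err := clientConfig.ClientConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println(restConfig.Host)
}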

@wanghaoran1988
Member Author

@soltysh Hi, could you take a look at this issue?

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label Apr 11, 2021
@wanghaoran1988
Member Author

/remove-lifecycle stale

openshift-ci-robot removed the lifecycle/stale label Apr 12, 2021
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label Jul 11, 2021
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 10, 2021
@clcollins
Member

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label Sep 8, 2021
@wanghaoran1988
Member Author

This should already be fixed in #692.

@soltysh
Contributor

soltysh commented Oct 18, 2021

/close

@openshift-ci
Contributor

openshift-ci bot commented Oct 18, 2021

@soltysh: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-ci bot closed this as completed Oct 18, 2021