util: support load Colon-separated KUBECONFIG #761
Conversation
Signed-off-by: Morlay <[email protected]>

Compare: 215ef92 to c838121
```go
	}
	return clientcmd.NewDefaultClientConfig(*apiConfig, &clientcmd.ConfigOverrides{}), nil
}
return kubernetes.ConfigFromContext(endpointName, s)
```
Do we still need this?

github.com/docker/cli/cli/context/kubernetes does not provide a way to load a colon-separated KUBECONFIG. With these changes, any KUBECONFIG value is supported, and this line will no longer be executed.
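For reference, a minimal sketch of loading a colon-separated KUBECONFIG with client-go's clientcmd. It illustrates the mechanism, not the exact buildx code; the helper name `loadKubeConfig` and the explicit `Precedence` assignment are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

// loadKubeConfig is a hypothetical helper: it honors a colon-separated
// $KUBECONFIG by feeding every path into clientcmd's loading rules.
func loadKubeConfig() (clientcmd.ClientConfig, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	// NewDefaultClientConfigLoadingRules already splits $KUBECONFIG on the
	// OS path-list separator; the explicit equivalent looks like this:
	if env := os.Getenv("KUBECONFIG"); env != "" {
		rules.Precedence = filepath.SplitList(env)
	}
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{}), nil
}

func main() {
	cfg, err := loadKubeConfig()
	if err != nil {
		panic(err)
	}
	raw, err := cfg.RawConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", raw.CurrentContext)
}
```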
```console
$ echo $KUBECONFIG
:/Users/furkan.turkal/.kube/config:/Users/furkan.turkal/.kube/foo:/Users/furkan.turkal/.kube/bar:/Users/furkan.turkal/.kube/baz
```

This returns the following error for this commit:
This PR should fix it.
Why does your KUBECONFIG value start with `:`?
🤷‍♂️ I removed it.
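As an aside, a leading separator only produces an empty first entry once the list is split; the skipping of empty paths is clientcmd's documented merge behavior, stated here as an assumption about the code path. A small Go demonstration:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// On Unix, filepath.SplitList splits on ":"; a leading colon yields an
	// empty first element, which client-go's kubeconfig loader skips.
	paths := filepath.SplitList(":/Users/furkan.turkal/.kube/config:/Users/furkan.turkal/.kube/foo")
	fmt.Printf("%q\n", paths)
	// ["" "/Users/furkan.turkal/.kube/config" "/Users/furkan.turkal/.kube/foo"]
}
```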
```console
$ docker buildx create --name=kubeconfig --driver=kubernetes --driver-opt=namespace=gitlab
kubeconfig
$ docker buildx inspect kubeconfig
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0
Endpoint:  kubernetes:///kubeconfig?deployment=&kubeconfig=%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-dev.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-infra.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-sg.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--qc-hk.yaml
Status:    inactive
Platforms:

$ docker buildx inspect kubeconfig --bootstrap
[+] Building 21.9s (1/1) FINISHED
 => [internal] booting buildkit           21.8s
 => => waiting for 1 pods to be ready     21.5s
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0-787cbbd5d9-4fpf7
Endpoint:
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
```

Works well for me.
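The inspect output URL-encodes the `kubeconfig` query parameter; decoding it is a quick way to confirm the colon-separated list survives the round trip. A sketch, with the endpoint copied from the output above and shortened to two paths:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Endpoint taken from `docker buildx inspect` above, truncated to two
	// kubeconfig paths for brevity.
	endpoint := "kubernetes:///kubeconfig?deployment=&kubeconfig=%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-dev.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-infra.yaml"
	u, err := url.Parse(endpoint)
	if err != nil {
		panic(err)
	}
	// Query() decodes %2F and %3A back into "/" and ":".
	fmt.Println(u.Query().Get("kubeconfig"))
	// Output: /Users/morlay/.kube/config--hw-dev.yaml:/Users/morlay/.kube/config--hw-infra.yaml
}
```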
Should we remove empty files?
For me, I set:

```console
export KUBECONFIG=$(echo $(ls ~/.kube/config--*.yaml) | sed 's/ /:/g')
```
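A Go equivalent of that one-liner, for comparison (a hypothetical snippet, not part of the PR): glob the per-cluster config files and join them with the OS path-list separator.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	// Collect ~/.kube/config--*.yaml and join with ":" (os.PathListSeparator
	// on Unix), mirroring the ls | sed pipeline above.
	matches, err := filepath.Glob(filepath.Join(home, ".kube", "config--*.yaml"))
	if err != nil {
		panic(err)
	}
	kubeconfig := strings.Join(matches, string(os.PathListSeparator))
	os.Setenv("KUBECONFIG", kubeconfig)
	fmt.Println(kubeconfig)
}
```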
@morlay Just applied your patch in my own fork and it seems to be working now, but:

```console
$ kubectl config current-context
kind-cluster-api-test
$ go run cmd/buildx/main.go create --name=kubeconfig --driver=kubernetes --driver-opt=namespace=gitlab
kubeconfig
$ go run cmd/buildx/main.go inspect kubeconfig
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0-787cbbd5d9-d2d9w
Endpoint:
Status:    running
Platforms: linux/amd64, linux/386
```

But I am getting the following issue and am not sure whether it is related to this; any ideas?
Why does your …? This is what I have:

```console
$ kubectl config current-context
hw-dev
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         hw-dev     hw-dev     user1
          hw-infra   hw-infra   hw-infra
          hw-sg      hw-sg      hw-sg
          qc-hk      default    default
```
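Background that may explain the current-context question: when KUBECONFIG lists multiple files, kubectl's merge rules say the first file to set a value (current-context included) wins. A small client-go sketch that prints the merged result (illustrative, not buildx code):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load() merges every file listed in $KUBECONFIG; per kubectl's merge
	// rules, the first file that sets current-context wins.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	merged, err := rules.Load()
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", merged.CurrentContext)
	for name := range merged.Contexts {
		fmt.Println("context:", name)
	}
}
```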
@AkihiroSuda still lgty?
fix #437 #760