
util: support load Colon-separated KUBECONFIG #761

Merged (1 commit, Sep 21, 2021)

Conversation

@morlay (Collaborator) commented Sep 8, 2021

Fixes #437 and #760.

@morlay force-pushed the kubeconfig-enhance branch from 215ef92 to c838121 on September 8, 2021 at 09:53
Review comment on this hunk:

    }
        return clientcmd.NewDefaultClientConfig(*apiConfig, &clientcmd.ConfigOverrides{}), nil
    }
    return kubernetes.ConfigFromContext(endpointName, s)
@morlay (Collaborator, Author):

Do we still need this?

github.com/docker/cli/cli/context/kubernetes does not provide a way to load a colon-separated KUBECONFIG. With this change, all KUBECONFIG values are supported, and this line will no longer be executed.
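As context for the discussion, kubectl treats KUBECONFIG as a list of paths separated by the platform's list separator. A minimal stdlib-only sketch of that splitting (the real buildx change delegates this to client-go's clientcmd loading rules rather than splitting by hand; `kubeconfigPaths` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// kubeconfigPaths splits a KUBECONFIG value into individual file paths,
// the way kubectl's loading rules do. filepath.SplitList uses the
// platform list separator (':' on Unix, ';' on Windows).
func kubeconfigPaths(value string) []string {
	return filepath.SplitList(value)
}

func main() {
	fmt.Println(kubeconfigPaths("/home/u/.kube/dev.yaml:/home/u/.kube/prod.yaml"))
}
```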

@Dentrax (Contributor) commented Sep 8, 2021

$ echo $KUBECONFIG
:/Users/furkan.turkal/.kube/config:/Users/furkan.turkal/.kube/foo:/Users/furkan.turkal/.kube/bar:/Users/furkan.turkal/.kube/baz

With this commit, I get the following error:

Error: could not read kubeconfig: stat : no such file or directory

@morlay (Collaborator, Author) commented Sep 8, 2021

@Dentrax this PR should fix it. The new kubeconfig loading follows what kubectl does.

@morlay (Collaborator, Author) commented Sep 8, 2021

Why does your KUBECONFIG value start with ':'?

@Dentrax (Contributor) commented Sep 8, 2021

> Why does your KUBECONFIG value start with ':'?

🤷‍♂️ I removed the ':' prefix and am still getting the same error. 🤔

@morlay (Collaborator, Author) commented Sep 8, 2021

$ docker buildx create --name=kubeconfig --driver=kubernetes --driver-opt=namespace=gitlab
kubeconfig
$ docker buildx inspect kubeconfig                                                        
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0
Endpoint:  kubernetes:///kubeconfig?deployment=&kubeconfig=%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-dev.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-infra.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--hw-sg.yaml%3A%2FUsers%2Fmorlay%2F.kube%2Fconfig--qc-hk.yaml
Status:    inactive
Platforms: 
$ docker buildx inspect kubeconfig --bootstrap
[+] Building 21.9s (1/1) FINISHED                                                                                                                                           
 => [internal] booting buildkit                                                                                                                                       21.8s
 => => waiting for 1 pods to be ready                                                                                                                                 21.5s
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0-787cbbd5d9-4fpf7
Endpoint:  
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

It works well for me.

@tonistiigi (Member) commented:

Should we remove empty files?

@morlay (Collaborator, Author) commented Sep 9, 2021

> Should we remove empty files?

kubectl doesn't, so I don't think we should add a file-existence check.

I set my KUBECONFIG like this to make sure the files exist:

export KUBECONFIG=$(echo $(ls ~/.kube/config--*.yaml) | sed 's/ /:/g')
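For illustration only, here is what the filtering tonistiigi asked about could look like: dropping empty entries produced by a leading, trailing, or doubled colon (a hypothetical `nonEmptyPaths` helper, not what the PR merged — per the comment above, the merged change follows kubectl and keeps such entries):

```go
package main

import (
	"fmt"
	"strings"
)

// nonEmptyPaths drops empty entries from a colon-separated KUBECONFIG
// value. An empty entry (e.g. from a leading ':') would otherwise be
// handed to the loader as the path "", producing errors like
// "stat : no such file or directory".
func nonEmptyPaths(kubeconfig string) []string {
	var out []string
	for _, p := range strings.Split(kubeconfig, ":") {
		if p != "" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(nonEmptyPaths(":/Users/u/.kube/config:/Users/u/.kube/foo"))
}
```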

@Dentrax (Contributor) commented Sep 9, 2021

@morlay I just applied your patch in my own fork and it seems to work now, but:

$ kubectl config current-context
kind-cluster-api-test
$ go run cmd/buildx/main.go create --name=kubeconfig --driver=kubernetes --driver-opt=namespace=gitlab
kubeconfig
$ go run cmd/buildx/main.go inspect kubeconfig
Name:   kubeconfig
Driver: kubernetes

Nodes:
Name:      kubeconfig0-787cbbd5d9-d2d9w
Endpoint:
Status:    running
Platforms: linux/amd64, linux/386

But I'm getting the following issue, and I'm not sure whether it's related to this change. Any ideas?

$ cat ~/.docker/buildx/instances/kubeconfig
{"Name":"kubeconfig","Driver":"kubernetes","Nodes":[{"Name":"kubeconfig0","Endpoint":"kubernetes:///kubeconfig?deployment=\u0026kubeconfig=%3A%2FUsers%2Ffurkan.turkal%2F.kube%2Fconfig%3A%2FUsers%2Ffurkan.turkal%2F.kube%2Ffoo%3A%2FUsers%2Ffurkan.turkal%2F.kube%2Fbar%3A%2FUsers%2Ffurkan.turkal%2F.kube%2Fbaz%3A%2FUsers%2Ffurkan.turkal%2Fauth%2Fk8s-auto%2FS_PLATFORM_P1_6PLATFORM_MOON","Platforms":null,"Flags":null,"ConfigFile":"","DriverOpts":{"namespace":"gitlab"}}],"Dynamic":false}
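Note that the kubeconfig query parameter in the stored endpoint above still begins with an encoded colon (%3A), i.e. the value recorded at `create` time still had the leading-':' prefix. A small stdlib-only sketch for inspecting such an endpoint (`decodeKubeconfigParam` is a hypothetical helper, not part of buildx):

```go
package main

import (
	"fmt"
	"net/url"
)

// decodeKubeconfigParam extracts and URL-decodes the kubeconfig query
// parameter from a buildx kubernetes:// endpoint, revealing the raw
// (possibly colon-prefixed) path list stored in the instance file.
func decodeKubeconfigParam(endpoint string) (string, error) {
	u, err := url.Parse(endpoint)
	if err != nil {
		return "", err
	}
	// Query().Get percent-decodes the value, so %3A becomes ':'.
	return u.Query().Get("kubeconfig"), nil
}

func main() {
	ep := "kubernetes:///kubeconfig?deployment=&kubeconfig=%3A%2FUsers%2Fu%2F.kube%2Fconfig"
	v, _ := decodeKubeconfigParam(ep)
	fmt.Println(v)
}
```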

$ kubectl config get-contexts
CURRENT   NAME                                            CLUSTER                        AUTHINFO           NAMESPACE
*         kubernetes-admin@s-platform-p1-6platform-moon   s-platform-p1-6platform-moon   kubernetes-admin

$ kubectl cluster-info
Kubernetes control plane is running at ...
Metrics-server is running at ...
...

$ kubectl get pods -n gitlab
NAME                           READY   STATUS    RESTARTS   AGE
kubeconfig0-787cbbd5d9-d2d9w   1/1     Running   0          27m

$ ./buildx build -t test:simple-golang-app -f Dockerfile .
[+] Building 0.0s (0/0)
Error: no valid drivers found: cannot determine Kubernetes namespace, specify manually: invalid configuration: [context was not found for specified context: kubernetes-admin@s-platform-p1-6platform-moon, cluster has no server defined]

@morlay (Collaborator, Author) commented Sep 9, 2021

@Dentrax why does your `kubectl config current-context` not match `kubectl config get-contexts`? The '*' should point at the current context, like this:

$ kubectl config current-context
hw-dev
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         hw-dev     hw-dev     user1      
          hw-infra   hw-infra   hw-infra   
          hw-sg      hw-sg      hw-sg      
          qc-hk      default    default  

@tonistiigi (Member) commented:

@AkihiroSuda still lgty?

Merging this pull request may close the linked issue: "error when KUBECONFIG contains more than one file"