kubectl configuration file issue MacOSX #54
Hi @hawksight, Thanks for all the info. I'll have to try this myself and check that I can deliver a fix for it quickly. It seems straightforward from what you're observing.
Hi @hawksight, I've tried with the latest release of the provider:

```hcl
resource "kind_cluster" "test" {
  name           = "test"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    networking {
      api_server_address = "0.0.0.0"
    }
  }
}
```

I'm unable to reproduce your issue. In my test, the above config produces this kubeconfig:
You can check out the extra test that I added specifically for this on this branch: https://github.com/tehcyx/terraform-provider-kind/tree/api_server_networking. If you're able to, double-check that your provider version matches the current release, and if possible also check that the test works on your machine; it should fail if the config is not set correctly. Do you have only podman on your machine, or both podman and docker? Does the API server change work for you on vanilla kind, without Terraform?
This I actually expect, as the endpoint is marked as a

PS: The test suite takes a while to run; on my branch you can run just the networking tests (which is two tests) with this command
Hey @tehcyx, thanks for looking into this, and now I feel silly, because you are correct: when I create a cluster with kind without the Terraform provider, I get the same issue:

```shell
kind create cluster --name=test --config kind-cluster.yaml
```

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # ipFamily: ipv6
  apiServerAddress: 0.0.0.0
```

```shell
k config view | yq e .clusters -
```

```yaml
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://:51376
  name: kind-test
```

Looking at my base cluster, I must have already patched it, as it is set to

I've had a look for issues on kind and have found people had a similar issue, with a solution here:

I have only podman, as I removed docker for licence reasons.
I created a new directory for terraform and set up fresh, so I had the latest version of your provider as above. From running the tests as you suggested, both tests do pass:
And on the second test, I get the same apparent issue, as the config file looked like this:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://:52511
  name: kind-tf-acc-networking-7925541741874413366
contexts:
- context:
    cluster: kind-tf-acc-networking-7925541741874413366
    user: kind-tf-acc-networking-7925541741874413366
  name: kind-tf-acc-networking-7925541741874413366
current-context: kind-tf-acc-networking-7925541741874413366
kind: Config
preferences: {}
users:
- name: kind-tf-acc-networking-7925541741874413366
  user:
    client-certificate-data: <REDACTED>
    client-key-data: <REDACTED>
```

For reference, test 1 output:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://127.0.0.1:52283
  name: kind-tf-acc-networking-7925541741874413366
contexts:
- context:
    cluster: kind-tf-acc-networking-7925541741874413366
    user: kind-tf-acc-networking-7925541741874413366
  name: kind-tf-acc-networking-7925541741874413366
current-context: kind-tf-acc-networking-7925541741874413366
kind: Config
preferences: {}
users:
- name: kind-tf-acc-networking-7925541741874413366
  user:
    client-certificate-data: <REDACTED>
    client-key-data: <REDACTED>
```

I really did have to be quick to copy those :)
Interesting edge case, to say the least. So the fix, as of now, is to manually override the kubeconfig after creation? Hopefully they can fix it upstream, and I can then create a new version of the provider that pulls in the new kind version.
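Until an upstream fix lands, the manual override can be scripted. The following is a minimal sketch, not part of the provider: the file path and port number are made up for illustration, and it simply inserts `0.0.0.0` where the host portion of the `server:` field is empty.

```shell
# Write a sample kubeconfig fragment with the empty-host server field
# seen in this issue (path and port are illustrative).
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://:52511
  name: kind-test
EOF

# Insert 0.0.0.0 where the host is empty; -i.bak keeps a backup and
# works with both GNU and BSD (macOS) sed.
sed -i.bak 's|server: https://:|server: https://0.0.0.0:|' /tmp/sample-kubeconfig.yaml

grep -o 'https://[^ ]*' /tmp/sample-kubeconfig.yaml
# → https://0.0.0.0:52511
```

Alternatively, `kubectl config set-cluster kind-test --server=https://0.0.0.0:<port>` rewrites the same field in place in your active kubeconfig.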
Let's close this issue and wait to see this fixed upstream.
On macOS, using `podman` (and `podman machine`), although I assume it would be the same for Docker Desktop users too.

In order to use the kind k8s API, you need to open the apiServerAddress on `0.0.0.0` rather than the default `127.0.0.1`. E.g. a kind config file with:
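A minimal kind config of that shape (a sketch matching the one shown with the `kind create cluster` command elsewhere in this thread) looks like:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: 0.0.0.0
```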
If you leave it as default (`127.0.0.1`), then everything just times out, but the configuration file is correct, e.g.:

So I translated that config file into this awesome provider instead. This was almost perfect, until I tried to connect to the new cluster. I kept getting the following error:

A bit of a red herring, as I found that the generated config file was slightly incorrect: the `api_server_address` input wasn't propagated to the config file. Neither to the separate one, nor to the config in my default `KUBECONFIG` file.

It's a simple fix to edit the `server` field to include the `0.0.0.0`, but I think this works when calling kind natively, without the module. I couldn't see where the kubectl config is generated, but perhaps the custom setting isn't being passed, or is being passed blank, somewhere?

I have a workaround, but thought others might see the same thing, so I'm sharing here to save some pain working it out.
[ EDIT ]

I looked in terraform state (`tf state show kind_cluster.default`) and the endpoint setting is also blank, e.g. `endpoint = "https://:57355"`.

Also tried explicitly setting the address to `127.0.0.1`, and that populates the config and state as expected:
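For reference, the explicit setting is just the same `kind_config` block with the default address spelled out. A sketch, based on the provider config shown elsewhere in this thread (the resource and cluster names are illustrative):

```hcl
resource "kind_cluster" "default" {
  name           = "test"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    networking {
      # Explicitly setting the default value propagates correctly to the
      # kubeconfig server field and to `endpoint` in Terraform state.
      api_server_address = "127.0.0.1"
    }
  }
}
```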