
Error when switching cluster from zonal to regional #6

Closed
kupson opened this issue Mar 25, 2021 · 2 comments

Comments


kupson commented Mar 25, 2021

Warning: changing the cluster type removes and re-creates the whole cluster. I changed the type strictly for testing.

Errors like:

Error: Get "http://localhost/api/v1/namespaces/nginx-ingress": dial tcp [::1]:80: connect: connection refused

Upstream issue - hashicorp/terraform-provider-kubernetes#1102
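The localhost address shows up because the kubernetes/helm providers lose their connection details while the cluster is being replaced. A minimal sketch of the usual wiring, assuming the providers read the endpoint and CA certificate from the GKE module's outputs (the module.gke.* output names below are hypothetical, not necessarily this repo's):

data "google_client_config" "default" {}

provider "kubernetes" {
  # When the cluster is destroyed and re-created, these values are unknown at
  # plan time and the provider falls back to http://localhost, which produces
  # the "connection refused" error above.
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}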

Workaround:

./scripts/terraform_local_dev.sh apply -target=module.gke.google_container_cluster.primary
./scripts/terraform_local_dev.sh apply -target=helm_release.cert_manager
./scripts/terraform_local_dev.sh apply

The second line is there to fix #5.


kupson commented Mar 26, 2021

There is another error when switching between zonal and regional. The module.gcloud_no_default_standard_storageclass.module.gcloud_kubectl.null_resource.run_command resource tries to remove an annotation from the no-longer-existing cluster and fails.

To fix the problem you can remove the problematic entries from the TF state (the underlying resources don't exist any more anyway):

./scripts/terraform_local_dev.sh state list | grep no_default
./scripts/terraform_local_dev.sh state rm module.gcloud_no_default_standard_storageclass.module.gcloud_kubectl.null_resource.run_destroy_command
...
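A minimal sketch that removes every matching entry in one pass (assuming the grep output contains only state addresses you really want to drop; review the list before deleting):

./scripts/terraform_local_dev.sh state list | grep no_default \
  | xargs -n1 ./scripts/terraform_local_dev.sh state rm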

kupson added a commit that referenced this issue Apr 10, 2021
Fixes:
- #4

Should also fix:
- #5
- #6

kupson commented Apr 11, 2021

I tried to make it work, but there are problems:

  • terraform gets confused when the cluster that used to exist is completely replaced by a new one (it fails on missing CRDs while checking the state of resources, on missing data sources, etc.)
  • even after a careful dance of multiple terraform invocations with different -target module.cluster-* options, plenty of resources are left in limbo (especially around Ingress settings)

As I suspected initially, this setting cannot be changed on an existing cluster; a full cluster re-creation is required, which is already noted in README.md.

@kupson kupson closed this as completed Apr 11, 2021
kupson added a commit that referenced this issue Apr 11, 2021
…ring

Didn't solve #6 but it's still better IMHO