
[BUG] rancher2_cluster_v2's rke_config[0].etcd[0].s3_config[0].endpoint doesn't seem to tolerate a value change during terraform apply #1413

irishgordo opened this issue Sep 20, 2024
Rancher Server Setup

Information about the Cluster

  • Kubernetes version: v1.29.8-k3s1
  • Downstream RKE2 Version: v1.29.8+rke2r1
  • Imported Harvester v1.3.2
  • Harvester Node Driver

User Information

  • What is the role of the user logged in? Admin

Provider Information

  • What is the version of the Rancher v2 Terraform Provider in use? v5.0.0
  • What is the version of Terraform in use? v1.9.3

Describe the bug

Suppose you are setting up an S3-compatible endpoint via Terraform somewhere accessible; in this instance, the Harvester Terraform provider stands up a VM running MinIO that is configured with Ansible (via the Ansible Terraform provider) plus some cloud-init. During apply, rancher2_cluster_v2 complains that it did not know the IPv4 address for rke_config[0].etcd[0].s3_config[0].endpoint at plan time (even when introducing depends_on props in the resources), failing with:

╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for rancher2_cluster_v2.rke2-terraform to include new values learned so far during apply, provider "registry.terraform.io/rancher/rancher2" produced an invalid new value for
│ .rke_config[0].etcd[0].s3_config[0].endpoint: was cty.StringVal(""), but now cty.StringVal("https://192.168.104.251:9000").
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
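
For context, a minimal sketch of the shape of the configuration involved. The resource names, the bucket, and the network_interface attribute path are hypothetical placeholders, since the actual config isn't included in this report:

# Hypothetical sketch; names and the attribute path to the VM's IP are
# illustrative, not copied from the real configuration.
resource "harvester_virtualmachine" "minio_vm" {
  # ... VM running MinIO, configured via Ansible + cloud-init ...
  # Its IPv4 address is only known after the VM is created.
}

resource "rancher2_cluster_v2" "rke2-terraform" {
  name               = "rke2-terraform"
  kubernetes_version = "v1.29.8+rke2r1"

  rke_config {
    etcd {
      s3_config {
        bucket = "etcd-backups" # hypothetical bucket name
        # Computed from the VM's IP at apply time; this is the value that
        # flips from "" to "https://<ip>:9000" mid-apply and trips the
        # "inconsistent final plan" error.
        endpoint = "https://${harvester_virtualmachine.minio_vm.network_interface[0].ip_address}:9000"
      }
    }
  }

  depends_on = [harvester_virtualmachine.minio_vm]
}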

Workaround

  • re-running terraform apply overcomes the bug

To Reproduce

Please read:

This relies heavily on having a single-node Harvester v1.3.2 and Rancher v2.9.1 (applied with the vcluster addon manifest linked in the Rancher Server Setup above)

Actual Result

  • Terraform fails with the error shown above; applying the workaround (re-running terraform apply) clears it

Expected Result

  • ideally, no error: the provider could treat the endpoint as unknown at plan time and accept the value once the resources it depends_on have finished creating. I guess this all depends on whether someone's Terraform is standing up an S3-compatible resource on the fly or not... 🤷 🤔 ... but I'm not sure, maybe this is bad practice 😅 - super willing to close this out if what is taking place is totally expected (one possible mitigation is sketched below)
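
One possible mitigation, sketched as an assumption rather than a verified fix: make the endpoint known at plan time by giving the MinIO VM a static IP (e.g. via cloud-init) and building the endpoint string from a plain variable instead of a computed attribute, so the planned value never changes mid-apply:

variable "minio_ip" {
  description = "Static IPv4 assigned to the MinIO VM (e.g. via cloud-init)"
  type        = string
  default     = "192.168.104.251" # hypothetical; matches the IP in the error output
}

resource "rancher2_cluster_v2" "rke2-terraform" {
  # ... rest of the cluster definition ...
  rke_config {
    etcd {
      s3_config {
        bucket = "etcd-backups" # hypothetical bucket name
        # Known at plan time, so the provider never sees the value change
        # between plan and apply.
        endpoint = "https://${var.minio_ip}:9000"
      }
    }
  }
}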

Screenshots

Additional context
