Always update in-place #207
Comments
I don't have the problem with
Seeing the same issue with release 1.0.0: Terraform plan shows changes pending.
Terraform apply runs cleanly, but running plan again shows the same changes still pending.
@remche, @matthewmelvin, could you please provide your RKE tf file?
@matthewmelvin, you can remove your Calico cloud provider; it could be defined like that, but in this case it isn't needed because the default value is being used. Anyway, if you want to define it...
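A minimal sketch of what such an explicit definition might look like, assuming the provider's network block mirrors the RKE cluster.yml schema (the plugin and cloud_provider values here are placeholders, not taken from the thread):

```hcl
resource "rke_cluster" "cluster" {
  # ... nodes, services, etc. omitted ...

  network {
    plugin = "calico"

    # Only needed when overriding the default value.
    calico_network_provider {
      cloud_provider = "none"
    }
  }
}
```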
Changing the network block does remove...
... from the list of changes that are always pending. 👍 But I'm still left with the same changes as koboltmarky originally reported...
... even after multiple applies.
@remche, your issue should be addressed by PR #215. @matthewmelvin, are you still seeing false diff issues? I've tested with your tf file and I'm not getting diffs anymore. @koboltmarky, besides these arguments, is tf marking any other argument for update? These arguments are set as newComputed if
@koboltmarky, @remche, that's weird. I've tested using both your configs and they work fine for me: no updates once the cluster is created. I'm not able to reproduce your issue now. Are you both using the latest provider release, v1.0.1? Have you applied a tf plan at least once with the new provider release?
I was trying with a built-from-source version. I can confirm I get the same with 1.0.1. Debug log attached.
I'm creating a new cluster with your config but not getting the same result; I'm not able to reproduce the issue. Have you tried creating a new cluster with the same config? Are you getting the same issue in that case?
Yes, I recreated the cluster from scratch (new VM). I did a new run with both apply stages, if that makes sense.
I have often destroyed and recreated different clusters, and the behavior is the same every time. With v1.0.1 it is the same.
@rawmind0 I discovered that when I disable
@koboltmarky, using this file, adapted from yours, the RKE cluster deploys fine with no updates... Could you please take a look and check if something is different in yours?
Some small differences:
@koboltmarky, about the differences: I don't think the number of nodes or the use of internal addresses makes any difference. The remaining items I updated and tested. The RKE cluster is still deployed fine and I'm getting no diff on the next
@remche, could you please provide your
The tf block is here: https://github.com/remche/terraform-openstack-rke/blob/5b4dfd8075171e4d589267f17cb5337b48e7165e/modules/rke/main.tf#L114-L128 and the resulting tfstate block:
...
"cloud_provider": [
{
"aws_cloud_config": [],
"aws_cloud_provider": [],
"azure_cloud_config": [],
"azure_cloud_provider": [],
"custom_cloud_config": "",
"custom_cloud_provider": "",
"name": "openstack",
"openstack_cloud_config": [],
"openstack_cloud_provider": [
{
"block_storage": [
{
"bs_version": "",
"ignore_volume_az": false,
"trust_device_path": false
}
],
"global": [
{
"auth_url": "https://url.domain.tld:5000/v3",
"ca_file": "",
"domain_id": "default",
"domain_name": "",
"password": "xxxxxx",
"region": "",
"tenant_id": "xxxxxxxxxxxxxxxxxxxxxxxxx",
"tenant_name": "",
"trust_id": "",
"user_id": "",
"username": "xxxxxx"
}
],
"load_balancer": [
{
"create_monitor": false,
"floating_network_id": "",
"lb_method": "",
"lb_provider": "",
"lb_version": "",
"manage_security_groups": false,
"monitor_delay": "",
"monitor_max_retries": 0,
"monitor_timeout": "",
"subnet_id": "",
"use_octavia": false
}
],
"metadata": [
{
"request_timeout": null,
"search_order": null
}
],
"route": [
{
"router_id": null
}
]
}
],
"vsphere_cloud_config": [],
"vsphere_cloud_provider": []
}
],

That's weird that the
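For reference, a minimal HCL sketch that could produce a cloud_provider state block like the one above; the block and argument names are inferred from the state keys, and the values are the same placeholders:

```hcl
resource "rke_cluster" "cluster" {
  # ... nodes, services, etc. omitted ...

  cloud_provider {
    name = "openstack"

    openstack_cloud_provider {
      global {
        auth_url  = "https://url.domain.tld:5000/v3"
        domain_id = "default"
        tenant_id = "xxxxxxxxxxxxxxxxxxxxxxxxx"
        username  = "xxxxxx"
        password  = "xxxxxx"
      }
    }
  }
}
```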
@rawmind0 I'm using a tf module:

module "rancher-admin-cluster" {
source = "./modules/rancher-admin-cluster"
providers = {
rancher2 = rancher2.bootstrap
}
kubernetes_version = "v1.18.3-rancher2-2"
node_username = "rancher"
ssh_key_file_name = "~/.ssh/id_rsa"
rancher_version = "v2.4.5"
rancher_admin_url = "xxxxxx"
rancher_admin_ip = "xxxxx"
docker_registry_url = "xxxxxxx"
docker_registry_user = var.docker_registry_user
docker_registry_password = var.docker_registry_password
cluster_domain = "xxxxxx"
cert_manager_version = "0.15.0"
rancher_admin_password = var.rancher_admin_password
}
Unfortunately I am still seeing the same persistent diff with the latest version.
issue-207-test-1594177150.txt
Same behaviour in my case; here is some information, in case it helps:
I was able to remove the perpetual update on the three
but I still have:
Interesting fact: in this first case, when I add this configuration, no more update in-place occurs! (I'm trying to move from the cluster_yaml file params to regular params, and these params match my current configuration):
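The block added in that comment is not reproduced here; a purely hypothetical illustration of moving such settings from the cluster YAML into regular arguments, assuming the provider's services schema follows the RKE cluster.yml layout, might look like:

```hcl
resource "rke_cluster" "cluster" {
  # Instead of driving these settings through the cluster_yaml file,
  # they are declared as regular arguments (values are examples only).
  services {
    kube_controller {
      cluster_cidr             = "10.42.0.0/16"
      service_cluster_ip_range = "10.43.0.0/16"
    }

    kubelet {
      cluster_domain     = "cluster.local"
      cluster_dns_server = "10.43.0.10"
    }
  }
}
```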
Second case: another fresh cluster deployed from Terraform with regular arguments (not cluster_yaml) did not have this behaviour when no changes were expected, but if I add a new config, the same configuration items are listed for update. Added configuration (no change detected on the previous plan):
Plan:
Both clusters are using:
Updated
I still get a few false updates when reapplying 😢
@remche, unfortunately I don't have any OpenStack installation to test with, but I've added some provider debug info in PR #239. Configuring the provider with debug (not Terraform debug) and a log file like
you'll get debug messages on
Could you please test it and take a look at the debug messages showing what is changing on your apply?
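A sketch of the kind of provider configuration that comment describes; the argument names and file path are assumptions based on the comment, not confirmed against the released provider:

```hcl
provider "rke" {
  # Provider-level debug, distinct from Terraform's own TF_LOG debug output.
  debug    = true
  log_file = "rke_debug.log"
}
```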
The main issue is caused by hashicorp/terraform-plugin-sdk#98. The workaround is rechecking arguments on
But note that with #239 I get a lot of things marked as update in-place:
Thx for the hard work on this!
@remche, could you please also provide the debug lines with the argument changes? The most interesting is the change on
On nodes, what is trying to change? Is it trying to shift the nodes list order?
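For context, nodes is an ordered list of blocks in the resource, so merely reordering entries in the config can be reported as a change; a hypothetical illustration (addresses and roles are placeholders):

```hcl
resource "rke_cluster" "cluster" {
  # Swapping the order of these two blocks (without changing their contents)
  # could still show up as an in-place update in the plan.
  nodes {
    address = "10.0.0.10"
    user    = "rancher"
    role    = ["controlplane", "etcd"]
  }

  nodes {
    address = "10.0.0.11"
    user    = "rancher"
    role    = ["worker"]
  }
}
```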
Here is the sanitized output. I'm not sure how to interpret it. The nodes list order seems the same.
Thanks @remche... I think I found the cause of your issue after taking a look at your logs. The
I've updated PR #239 fixing the
🥳 |
Each terraform apply creates an update in-place. The following variables are marked as updated:
~ cluster_cidr = "10.42.0.0/16" -> (known after apply)
~ cluster_dns_server = "10.43.0.10" -> (known after apply)
~ cluster_domain = "lalala.local" -> (known after apply)
~ kube_config_yaml = (sensitive value)
~ rke_cluster_yaml = (sensitive value)
~ rke_state = (sensitive value)
Used versions:
Terraform v0.12.24
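For context on why the perpetual diff is disruptive: computed attributes like the ones listed in the plan above are typically consumed downstream, for example by writing the kubeconfig to disk (the resource name and path here are illustrative only):

```hcl
resource "local_file" "kube_config" {
  filename = "${path.module}/kube_config_cluster.yml"
  content  = rke_cluster.cluster.kube_config_yaml
}
```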