Openstack: diffs didn't match during apply. This is a bug with Terraform ... #3662
One of the things I see in the error message is the following:
I see the same thing for volumes. When you increase/decrease, are you also modifying the floating IP and volume count? Does this problem go away if you remove floating IPs and volumes from the manifest (testing without the floating IP and volume attachments)?
I'll give you some terraform plan/apply output in a private gist, but here's the "gist" of it. The floating ips, volumes, and instances are all driven off of the same count variable. Here's the bit of the config that handles that.
If an instance has an associated floating ip, a volume, or both, then when I raise the instance count there are problems with the diff for the existing instance. If the instance does not have a floating ip or a volume, then increasing the count does not cause a problem. Possibly of interest: if I run a 'plan' after increasing the count, the existing floating ip and/or volume show up as changed. You can check it out in the gist. Thanks!
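For reference, here is a minimal sketch of the pattern being described: instances, floating IPs, and volumes all driven by one count variable and wired together with element(). This is not the actual config from the gist; the resource names, pool, image, and flavor values are placeholders.

```
# Hypothetical sketch: one count variable drives instances, floating IPs, and volumes.
variable "count" {
  default = 2
}

resource "openstack_compute_floatingip_v2" "fip" {
  count = "${var.count}"
  pool  = "public" # placeholder pool name
}

resource "openstack_blockstorage_volume_v1" "data" {
  count = "${var.count}"
  name  = "${format("data-%02d", count.index + 1)}"
  size  = 10
}

resource "openstack_compute_instance_v2" "node" {
  count           = "${var.count}"
  name            = "${format("node-%02d", count.index + 1)}"
  image_name      = "ubuntu-14.04" # placeholder image
  flavor_name     = "m1.small"     # placeholder flavor
  security_groups = ["default"]

  # Each instance picks its own floating IP and volume by index.
  floating_ip = "${element(openstack_compute_floatingip_v2.fip.*.address, count.index)}"

  volume {
    volume_id = "${element(openstack_blockstorage_volume_v1.data.*.id, count.index)}"
  }
}
```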
I'm seeing the same thing using the following simple configuration:

```
variable "count" {
  default = 3
}

resource "openstack_blockstorage_volume_v1" "test" {
  count = "${var.count}"
  name  = "${format("test-%02f", count.index+1)}"
  size  = 1
}

resource "openstack_compute_instance_v2" "test" {
  count           = "${var.count}"
  name            = "${format("test-%02f", count.index+1)}"
  security_groups = ["default"]

  volume {
    volume_id = "${element(openstack_blockstorage_volume_v1.test.*.id, count.index)}"
  }
}
```

Everything applies and is created successfully, and subsequent calls to `terraform plan` report no changes. But once I modify the count and run `terraform plan` again, I get the following:

```
+ openstack_blockstorage_volume_v1.test.3
attachment.#: "" => "<computed>"
availability_zone: "" => "<computed>"
metadata.#: "" => "<computed>"
name: "" => "test-%!f(int=04)"
region: "" => "Calgary"
size: "" => "1"
volume_type: "" => "<computed>"
~ openstack_compute_instance_v2.test.0
volume.~1579122841.device: "" => "<computed>"
volume.~1579122841.id: "" => "<computed>"
volume.~1579122841.volume_id: "" => "${element(openstack_blockstorage_volume_v1.test.*.id, count.index)}"
~ openstack_compute_instance_v2.test.1
volume.~1579122841.device: "" => "<computed>"
volume.~1579122841.id: "" => "<computed>"
volume.~1579122841.volume_id: "" => "${element(openstack_blockstorage_volume_v1.test.*.id, count.index)}"
~ openstack_compute_instance_v2.test.2
volume.~1579122841.device: "" => "<computed>"
volume.~1579122841.id: "" => "<computed>"
volume.~1579122841.volume_id: "" => "${element(openstack_blockstorage_volume_v1.test.*.id, count.index)}"
+ openstack_compute_instance_v2.test.3
access_ip_v4: "" => "<computed>"
access_ip_v6: "" => "<computed>"
flavor_id: "" => "1"
flavor_name: "" => "<computed>"
image_id: "" => "<computed>"
image_name: "" => "<computed>"
name: "" => "test-%!f(int=04)"
network.#: "" => "<computed>"
region: "" => "Calgary"
security_groups.#: "" => "1"
security_groups.3814588639: "" => "default"
volume.#: "" => "1"
volume.~1579122841.device: "" => "<computed>"
volume.~1579122841.id: "" => "<computed>"
volume.~1579122841.volume_id: "" => "${element(openstack_blockstorage_volume_v1.test.*.id, count.index)}"
```

When applying, I get 3 errors about diffs, one for each previously existing compute resource.
However, the new resources are created correctly and the volume information for the existing resources stays the same... so it looks like everything worked, despite the report of errors.

@phinze Any idea why there would be a diff error in this case? Do the resources need some extra logic in them to handle cases of incrementing and decrementing?
Looking at your example, I noticed that in the plan output the volume associated with all three instances has the same "identifier" (not sure what its proper technical name is...): ~1579122841
Yes, that number is a hash of the volume block's attributes.
I believe it's the same bug as this one: #3885

Similar to #3885, I'm going to label this as a bug with core.

Similar to the related reports of this, I'm going to close this issue in favor of #3449.

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have a terraform config that brings up several different flavors of instances; each flavor has its own count variable associated with it, so I can increase or decrease the number of that flavor as the fancy strikes me.
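A minimal sketch of what "each flavor has its own count variable" might look like (the resource names, image, and flavor values below are hypothetical, not the reporter's actual config):

```
# Hypothetical: one count variable per instance flavor.
variable "compute_count" {
  default = 1
}

variable "storage_count" {
  default = 2
}

resource "openstack_compute_instance_v2" "compute" {
  count       = "${var.compute_count}"
  name        = "${format("arfarf-compute-%02d", count.index + 1)}"
  flavor_name = "m1.large"     # placeholder flavor
  image_name  = "ubuntu-14.04" # placeholder image
}

resource "openstack_compute_instance_v2" "storage" {
  count       = "${var.storage_count}"
  name        = "${format("arfarf-storage-%02d", count.index + 1)}"
  flavor_name = "m1.xlarge"    # placeholder flavor
  image_name  = "ubuntu-14.04" # placeholder image
}
```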
If I have a flavor (e.g. arfarf-compute) count set to 1 and increase it, I see the following message about the existing instance:
If the current count is 2 and I bump it to 3, I see analogous messages about instances 0 and 1.
I set TF_LOG=DEBUG and captured the output from a 1 -> 2 expansion. I can share the whole thing (it's 1+MB), but I'm pretty sure that this is the interesting bit:
It seems to be missing info about the existing floating ip. This is an HP Helion cloud and does not have the os-tenant-network extension enabled; I've worked around that by specifying both the network uuid and name in the instance's network block. Perhaps that's involved here? Let me know if I can provide additional information.
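For illustration, a hedged sketch of the workaround described above, giving the instance's network block both the uuid and the name (the UUID, network name, image, and flavor values here are placeholders, not the reporter's actual values):

```
resource "openstack_compute_instance_v2" "example" {
  name            = "arfarf-compute-01"
  image_name      = "ubuntu-14.04"   # placeholder image
  flavor_name     = "standard.small" # placeholder flavor
  security_groups = ["default"]

  # Workaround for clouds without the os-tenant-network extension:
  # specify both the uuid and the name of the network.
  network {
    uuid = "00000000-0000-0000-0000-000000000000" # placeholder network UUID
    name = "private-net"                          # placeholder network name
  }
}
```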
g.