Support use of computed values in import block IDs #34152
Hi @Rwwn, thanks for filing the issue. You cannot import a resource if the import ID is unknown, because Terraform needs the imported resource first in order to complete the plan. If what you are looking for is a method for handling multiple phases of apply to deal with unknown values, we are tracking the general problem space under #30937. Your attempt to look up the import data with a data source is the right approach, but from what I can see here it's only prevented by the …
Hi @jbardin, thanks for the response. What I'm trying to achieve relies on the depends_on being there, because the volumes are only created during the Terraform run: specifically, creating the EC2 instance from the AMI is what creates the EBS volumes I want to import. The AMI has several block devices associated with it, which result in volumes being created from snapshots when an instance is launched from the AMI, but these volumes won't be present in the state. So in short, all I have at the start of the plan is the AMI and the snapshots; the volumes get created during apply, and I need to delay the data source and import blocks until the volumes exist, hence the depends_on. Sorry if that wasn't clear; I understand it's a bit convoluted. I might be barking up the wrong tree with this solution, so I'm happy to hear any other workarounds as well.

On reflection, I suppose the problem with my solution is similar to the one with using computed values for for_each and count described in the issue you linked, #30937: if the volume ID for the import isn't known until apply, what is Terraform supposed to do when planning the aws_ebs_volume resources? It makes the plan conditional on the outcome of the apply, so I see the problem. If this were allowed, you'd have situations where the plan could not predict what Terraform would do to the EBS volumes on apply, which is probably not desirable. I may have to resort to importing these volumes manually after the apply after all, which is a bit of a shame.

The comment at #30937 (comment) describing lazy applies sounds like it would solve my use case. Whether allowing this is a good idea or not is another question.
Thanks @Rwwn, that's helpful information. The fact that this is attempting to import something which is implicitly created during the same run is not something the current …
Just in case anyone else comes across this, I've got a workaround using Terragrunt post hooks in which to run terraform output and terraform import commands. It works well, but of course it requires an external tool in the shape of Terragrunt. I couldn't find any way to do it natively in Terraform, short of writing a script that makes multiple Terraform invocations.
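For anyone following the same route, a minimal sketch of what such a hook can look like. The hook label and script name are placeholders, not the commenter's actual code; the after_hook block itself is standard Terragrunt syntax:

```hcl
# terragrunt.hcl — sketch of the post-hook workaround described above.
# "import_restored_volumes" and "./import-volumes.sh" are hypothetical
# names; the script would read volume IDs from `terraform output` and
# feed them to `terraform import`.
terraform {
  after_hook "import_restored_volumes" {
    commands     = ["apply"]
    execute      = ["./import-volumes.sh"]
    run_on_error = false
  }
}
```

Because the hook runs after the apply that created the volumes, their IDs are available by the time the import commands execute, which is exactly the ordering plain Terraform cannot express in one run.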
@jbardin I ran into a similar issue with the VPC route table's default local route that gets created. Do you know of any way in Terraform 1.7 to manage these routes within the initial plan/apply? The code below throws the following error when run from a clean apply.
Snippet of code:

```hcl
################################################################################
# Public Subnets
################################################################################

resource "aws_subnet" "public" {
  #checkov:skip=CKV_AWS_130:Public subnet needs public IPs
  for_each = toset(local.azs)

  availability_zone       = each.key
  cidr_block              = local.public_subnet_cidr[index(local.azs, each.key)]
  map_public_ip_on_launch = true
  vpc_id                  = aws_vpc.this.id

  tags = merge(
    { "Name" = "${local.name}-public-${each.key}" },
    local.tags,
    local.public_subnet_tags
  )
}

# Public Route Table
resource "aws_route_table" "public" {
  for_each = toset(local.azs)

  vpc_id = aws_vpc.this.id

  tags = merge(
    { "Name" = "${local.name}-public-rt-${each.key}" },
    local.tags,
  )
}

import {
  for_each = toset(local.azs)
  to       = aws_route.public_to_firewall[each.key]
  id       = "${aws_route_table.public[each.key].id}_${local.vpc_cidr}"
}

resource "aws_route" "public_to_firewall" {
  for_each = toset(local.azs)

  route_table_id         = aws_route_table.public[each.key].id
  destination_cidr_block = local.vpc_cidr
  vpc_endpoint_id        = tomap(local.az_to_endpoint)[each.key]

  timeouts {
    create = "5m"
  }
}
```
No, there's no way to do this in a single operation right now. Import must happen during the planning phase, but the resources you want to import don't exist until after apply is complete.
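The two-operation workflow that does work today can be sketched as follows. The output name restored_volume_ids and the resource address aws_ebs_volume.restored are hypothetical, introduced only for illustration:

```shell
# Phase 1: apply creates the instance, which implicitly creates the
# EBS volumes (or default routes) that need importing.
terraform apply

# Phase 2: the IDs are now known, so read them back from a
# hypothetical "restored_volume_ids" output and import each one,
# then run a second apply to reconcile the configuration.
for id in $(terraform output -json restored_volume_ids | jq -r '.[]'); do
  terraform import "aws_ebs_volume.restored[\"${id}\"]" "${id}"
done
terraform apply
```

This is essentially the same sequencing the Terragrunt hook automates: imports only happen once the first apply has made the IDs knowable.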
Yeah, in that case I think my only option is to manage the routes inline as part of the route table resource. That won't throw the same error.
Terraform Version
Use Cases
I have an EC2 instance which I would like to optionally restore from a backup AMI via Terraform. This AMI contains snapshots of the instance's EBS volumes (these volumes are also managed in my Terraform configuration as their own resources), so when an instance is built from it, the EBS volumes are built as well. The downside is that Terraform has no way of knowing about the volumes created by the AMI restore, so they won't be tracked in the state. The only way I can see to do it is this:
Having to manually import the EBS volumes and run two applies is not ideal. I'd like to be able to do all of this with a single apply.
Attempted Solutions
The first thing I tried was to use the ebs_block_device blocks inline on the aws_instance resource, rather than separate ebs_volume resources. Terraform recognised the restored EBS volumes in this case, but it always wanted to destroy and recreate them, along with the server itself, even when the parameters on the ebs_block_device blocks in the code matched the ones in the state. So that didn't work.
My second attempt involved using Terraform's new import blocks. The relevant code looks like this:
So I'm using the snapshot IDs from my AMI's data source as inputs to an EBS volume data source to find the volumes built from those snapshots. I've put a depends_on on the EBS volume data source to delay its read until after the EC2 instance is created, since that is when the EBS volumes associated with the AMI are restored from snapshots. I've then added import blocks that use the EBS volume data source to retrieve the volume IDs.
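The pattern just described can be sketched roughly like this. All names here (aws_instance.this, local.backup_snapshot_id, the "restored" labels) are illustrative stand-ins, not the issue author's exact code:

```hcl
# Sketch of the attempted pattern: look up the implicitly created
# volume via a data source, then import it. Names are hypothetical.
data "aws_ebs_volume" "restored" {
  # Delay the lookup until the instance (and thus the restored
  # volumes) exists.
  depends_on = [aws_instance.this]

  filter {
    name   = "snapshot-id"
    values = [local.backup_snapshot_id] # taken from the AMI data source
  }
}

import {
  to = aws_ebs_volume.restored
  # This ID is only known at apply time, which is what Terraform
  # rejects at plan time.
  id = data.aws_ebs_volume.restored.id
}

resource "aws_ebs_volume" "restored" {
  availability_zone = aws_instance.this.availability_zone
  size              = 100
}
```

As the maintainer notes above, the plan fails because the import ID is unknown until the instance has actually been applied.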
The code I've provided won't currently work for the case where var.restore_from_backup_enabled == false, since the EBS volume data sources won't be created, but I can get around that once for_each for import blocks is released. I think the logic should work when var.restore_from_backup_enabled == true, but when I try it, Terraform complains that the IDs for the imports are not known at plan time:
Proposal
It should be possible for import blocks to accept values that are only computed at apply time, similar to how resource arguments can. The plan output would show this:
And then the import would be performed at apply time, after the value is computed.
Without this feature, import blocks are of limited usefulness, since you have to know the IDs of what you're importing before running Terraform. This is an unfortunate limitation when Terraform is quite capable of finding those IDs for you via data sources.
References
No response