terraform/metadata empty after repeated builds #136

Open
kgugle opened this issue Oct 31, 2020 · 5 comments

kgugle commented Oct 31, 2020

Hey!

I have a pipeline that:

  1. Builds an EC2 instance A
  2. Provides the public IP of A to some internal resource B

When I run a build, the terraform/metadata file is correct and contains the output variables I've specified in my Terraform config. However, when I run the same build again, the file is empty, as shown below. I'd like the second build to also include all the output variables (similar to the Terraform CLI), regardless of whether the infrastructure was updated or not.

// Build 1:
name: <secret-name>
metadata: {"instance_state":"running","public_dns":"ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com","public_ip":"xx.xxx.xx.xx"}

// Build 2 (no changes to infrastructure, just rebuilding):
name: <secret-name>
metadata: {}
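
For context, resource B picks up the IP from terraform/metadata in a task roughly like this (a hedged sketch, not my exact config; the alpine image and jq install are placeholders):

    - task: configure-resource-b
      config:
        platform: linux
        image_resource:
          type: registry-image
          source: {repository: alpine}
        inputs:
          - name: terraform
        run:
          path: sh
          args:
            - -ec
            - |
              # terraform/metadata is the JSON file the resource writes on get
              apk add --no-cache jq >/dev/null
              PUBLIC_IP="$(jq -r .public_ip terraform/metadata)"
              echo "configuring resource B with ${PUBLIC_IP}"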

Is this the intended behavior? And if so, do you know any good ways to get around it?

I was thinking I could run terraform output afterwards, or directly grab the new .tfstate file from my S3 backend.
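
For example, something like this for the S3 route (a sketch; bucket, path, and region are placeholders, credentials are omitted, and it assumes a versioned bucket plus Terraform 0.14's -state flag):

    resources:
      - name: tfstate
        type: s3
        source:
          bucket: my-terraform-bucket        # placeholder
          versioned_file: env/terraform.tfstate
          region_name: us-west-2

    jobs:
      - name: read-outputs
        plan:
          - get: tfstate
          - task: terraform-output
            config:
              platform: linux
              image_resource:
                type: registry-image
                source: {repository: hashicorp/terraform, tag: "0.14.5"}
              inputs:
                - name: tfstate
              outputs:
                - name: tf-outputs
              run:
                path: sh
                args:
                  - -ec
                  - |
                    # Reads the fetched state file directly and prints every
                    # output variable, whether or not anything changed.
                    terraform output -state=tfstate/terraform.tfstate -json > tf-outputs/metadata.json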

@ljfranklin (Owner)

@kgugle the outputs should be present regardless of whether there's a diff or not. Could you share the relevant snippets of your pipeline config? Specifically, I'm curious whether this is one Concourse job that does a put to the Terraform resource followed by another task, or multiple Concourse jobs where the second job does a get on the Terraform resource.
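
For reference, the two shapes I mean (placeholder job and task names):

    # one job: put followed by a task
    - name: provision-and-use
      plan:
        - put: terraform       # the put's implicit get writes terraform/metadata
        - task: use-metadata

    # two jobs: the second job does a get on the resource
    - name: provision
      plan:
        - put: terraform
    - name: use
      plan:
        - get: terraform       # goes through check/get
        - task: use-metadata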


jryan128 commented Nov 13, 2020

I have the same issue.

Here's an abridged pipeline config:

  - name: terraform-plan-apply-dev
    serial: true
    serial_groups: [ dev-deploy ]
    on_failure: ((slack_notifications.put_failure_notification))
    on_error: ((slack_notifications.put_failure_notification))
    plan: ...
      - put: terraform
        params:
          plan_only: true
      - put: terraform

  - name: deploy-new-image-to-dev
    serial: true
    serial_groups: [ dev-deploy ]
    on_failure: ((slack_notifications.put_failure_notification))
    on_error: ((slack_notifications.put_failure_notification))
    plan:
      - get: terraform
      - task: ecs-update-service-if-needed # A task that references the terraform/metadata file
        config: *deploy_to_ecs_task
        params:
          CLUSTER: ...

@ljfranklin (Owner)

Try adding - {get: terraform, passed: [terraform-plan-apply-dev]} and see if that avoids the issue. There might be a bug in our check implementation, but trying out that workaround would help with getting to the root cause.
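
Applied to your pipeline above, only the get line changes:

  - name: deploy-new-image-to-dev
    serial: true
    serial_groups: [ dev-deploy ]
    plan:
      - {get: terraform, passed: [terraform-plan-apply-dev]}
      - task: ecs-update-service-if-needed
        config: *deploy_to_ecs_task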


derjust commented Jan 29, 2021

Same issue here:
Sometimes a job in Concourse only sees the second-to-last version of the Terraform resource. It is "properly" reported in the Concourse UI, i.e. the job read the plan_only: true version as its input, even though a newer version exists.

We use {get: terraform, passed: [terraform-plan-apply-dev]}, but that didn't help.
This is probably because the plan-only version of the state was also a valid output of the previous job (terraform-plan-apply-dev in this example).

It would be great if the get operation at least allowed filtering for plan_only: false; our outputs rarely change, so that would make this issue go away.

@ljfranklin (Owner)

I just pushed this change to the ljfranklin/terraform-resource:latest and ljfranklin/terraform-resource:0.14.5 images, which may address this issue. It seems like there's a potential ordering issue when a single job does multiple put steps to the resource: the first version produced can be detected as "latest" rather than the second. The workaround I pushed tries to sidestep this by ensuring the metadata file gets populated even if the first plan version gets pulled.

Another possible workaround is to change:

    plan: ...
      - put: terraform
        params:
          plan_only: true
      - put: terraform

to:

    plan: ...
      - put: terraform

since it doesn't seem like that initial plan step is buying you much anyway. Normally folks use a plan step to run against a PR branch without actually changing anything, or they run a plan job followed by a manually triggered apply job so that a human can review the plan output before triggering the apply.
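
Here's a sketch of that plan-then-apply shape with illustrative job names, using the resource's plan_only/plan_run params:

    - name: terraform-plan
      plan:
        - put: terraform
          params:
            plan_only: true

    # no automatic trigger: a human reviews the plan output from
    # terraform-plan, then kicks this job off by hand
    - name: terraform-apply
      plan:
        - get: terraform
          passed: [terraform-plan]
        - put: terraform
          params:
            plan_run: true   # applies the previously saved plan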

Let me know if you're still seeing the issue after pulling the latest image in case I misunderstood the bug.
