terraform/metadata empty after repeated builds #136
Comments
@kgugle the outputs should be present regardless of whether there's a diff or not. Could you share the relevant snippets of your pipeline config? Specifically I'm curious if this is one Concourse job that does a …
I have the same issue. Here's an abridged pipeline config:

```yaml
- name: terraform-plan-apply-dev
  serial: true
  serial_groups: [ dev-deploy ]
  on_failure: ((slack_notifications.put_failure_notification))
  on_error: ((slack_notifications.put_failure_notification))
  plan: ...
    - put: terraform
      params:
        plan_only: true
    - put: terraform

- name: deploy-new-image-to-dev
  serial: true
  serial_groups: [ dev-deploy ]
  on_failure: ((slack_notifications.put_failure_notification))
  on_error: ((slack_notifications.put_failure_notification))
  plan:
    - get: terraform
    - task: ecs-update-service-if-needed # A task that references the terraform/metadata file
      config: *deploy_to_ecs_task
      params:
        CLUSTER: ...
```
Try adding …
Same issue here: We … It would be great if the …
I just pushed this change to the … Another possible workaround is to change:

…

to:

…

Since it doesn't seem like that initial … Let me know if you're still seeing the issue after pulling the latest image in case I misunderstood the bug.
Hey!
I have a pipeline that:
When I run a build, the terraform/metadata file is correct and contains the output variables I've specified in my terraform config. However, when I run the same build again, the file is empty as shown below. I'd like the second build to also include all the output variables (similar to the terraform CLI), regardless of whether the infrastructure was updated or not.
Is this the intended behavior? And if so, are there any good ways you know of to get around this?
I was thinking I could run `terraform output` afterwards, or directly grab the new .tfstate file from my S3 backend.
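For what it's worth, the second idea can be sketched as a standalone Concourse task that copies the state file out of the S3 backend and extracts its outputs section. This is only a rough sketch, not something from this thread: the task name, image, bucket, key, and output directory are hypothetical placeholders, and the image is assumed to ship with both the aws CLI and jq.

```yaml
# Hypothetical fallback task: read outputs straight from the S3-backed state
# instead of relying on terraform/metadata from the resource's get step.
- task: read-terraform-outputs
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: { repository: my-registry/aws-cli-with-jq }  # placeholder image providing aws + jq
    outputs:
      - name: tf-outputs
    run:
      path: sh
      args:
        - -ec
        - |
          # Copy the current state file from the S3 backend (placeholder bucket/key).
          aws s3 cp s3://my-state-bucket/dev/terraform.tfstate state.json
          # In state format v4, the root-level "outputs" object maps each output
          # name to its value, so this gives the same data terraform/metadata would hold.
          jq '.outputs' state.json > tf-outputs/metadata.json
```

A downstream task could then read tf-outputs/metadata.json instead of terraform/metadata; alternatively, running `terraform output -json` inside a checkout of the terraform config yields the same information in a shape closer to the CLI's.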