
ConcurrentUpdateException for resource aws_appautoscaling_policy #17915

Open
afischer-opentext-com opened this issue Mar 4, 2021 · 3 comments
Labels
enhancement Requests to existing resources that expand the functionality or scope.

Comments

@afischer-opentext-com

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v0.14.7
Terraform aws provider plugin v3.30.0

Affected Resource(s)

  • aws_appautoscaling_policy

Terraform Configuration Files

I have a Terraform code base that creates many aws_appautoscaling_policy resources in parallel. Because the code base uses the count attribute to make resources optional, it is not possible to define explicit dependencies between all of the resources as a workaround. A reduced configuration illustrating the pattern follows.
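Here is a minimal, hypothetical sketch of that pattern (the ECS service, resource names, and the enable_autoscaling variable are made up for illustration, not taken from the actual code base):

```hcl
# Hypothetical, reduced reproduction: several policies are made optional via
# count and are therefore created in parallel against the same scaling target.
variable "enable_autoscaling" {
  type    = bool
  default = true
}

resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-service"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "cpu" {
  count              = var.enable_autoscaling ? 1 : 0
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}

resource "aws_appautoscaling_policy" "memory" {
  count              = var.enable_autoscaling ? 1 : 0
  name               = "memory-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 75
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageMemoryUtilization"
    }
  }
}
```

Because both policies only reference the scaling target, Terraform creates them concurrently, and Application Auto Scaling intermittently rejects one of the calls with the error shown under Actual Behavior.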

Expected Behavior

When creating multiple aws_appautoscaling_policy resources, the terraform apply call should not fail; if the provider observes a ConcurrentUpdateException, it should retry the request.

Actual Behavior

When creating multiple aws_appautoscaling_policy resources, the terraform apply call may fail with the following error:

Error: Failed to create scaling policy: Error putting scaling policy: ConcurrentUpdateException: You already have a pending update to an Auto Scaling resource.

Reviewing the code at https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_appautoscaling_policy.go#L232, it seems there is already a retry in case of a timeout. Perhaps this could be enhanced so that a retry also happens in case of a ConcurrentUpdateException.

References

There is a blog post, https://keita.blog/2018/01/29/aws-application-auto-scaling-for-ecs-with-terraform/, which describes a workaround for this issue, but it does not scale well and remains only a workaround.

@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Mar 4, 2021
@breathingdust breathingdust added enhancement Requests to existing resources that expand the functionality or scope. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 8, 2021
@nicolaei
Contributor

nicolaei commented Dec 6, 2021

I'm seeing the same issue with concurrent creations of aws_appautoscaling_scheduled_action using count or for_each.

@brentw-square

I'm running into the same issue, which prevents me from leveraging count or for_each when creating scheduled actions. The two workarounds are as follows:

  • Create your scheduled actions separately, and chain the ones using the same resource together with depends_on, i.e. Action 1, Action 2 depends on Action 1, Action 3 depends on Action 2, etc. (see the sketch after this list). This technically works, but gets messy fast, because you end up copy-pasting large blocks of the same Terraform code (see the example reference from the OP).
    • This could work better if you could pass in dynamic strings to the depends_on field, but Terraform doesn’t allow you to do that.
  • Set parallelism to 1 to avoid concurrent updates. Again, this technically works, but because parallelism applies to the whole run and can’t be configured for specific resources (an outstanding Terraform feature request), it would slow down the entire apply.
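As a rough sketch of the first workaround (using made-up action names and schedules, and reusing the hypothetical scaling target from the reproduction above), chaining scheduled actions for the same scaling target with depends_on forces Terraform to create them one at a time:

```hcl
# Hypothetical sketch: an explicit depends_on chain so the scheduled actions
# for one scaling target are created sequentially rather than in parallel.
resource "aws_appautoscaling_scheduled_action" "scale_up" {
  name               = "scale-up"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  schedule           = "cron(0 7 * * ? *)"

  scalable_target_action {
    min_capacity = 2
    max_capacity = 10
  }
}

resource "aws_appautoscaling_scheduled_action" "scale_down" {
  name               = "scale-down"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  schedule           = "cron(0 19 * * ? *)"

  scalable_target_action {
    min_capacity = 1
    max_capacity = 10
  }

  # Serializes the API calls; every additional action has to depend on the
  # previous one, which is what makes this approach messy at scale.
  depends_on = [aws_appautoscaling_scheduled_action.scale_up]
}
```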

I'm currently going with the first workaround. As mentioned, though, this gets extremely messy as soon as you have more than 2-3 actions, since you have to maintain your own dependency chain. Implementing a lock at the Scheduled Action level, rather than for a given Scaling Target, should fix the issue.

@correalenon

I'm facing the same issue with aws_appautoscaling_target using for_each.
The workaround cited above would work, but it is not feasible, as it would require declaring the same resource several times. If depends_on could be dynamic, that could solve the problem.
Unfortunately this seems to be a dead thread, with no resolution for three long years.
