
Terraform plan wants to revert changes made by the autoscaler #462

Closed

andrewnazarov opened this issue Apr 5, 2022 · 9 comments

andrewnazarov commented Apr 5, 2022

Readiness Checklist

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I am reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

It should be possible to apply Terraform changes without reverting the changes made by the autoscaler.

Current Behavior

For our deployments we enabled autoscaling. At first it was just set via autoscale = true, and Elastic Cloud performed the scaling successfully. However, after some time we noticed that terraform plan detects a difference and wants to revert the changes made by the autoscaler. We tried defining the autoscaling {} block in the hope that it would stop the drift from being detected, but it didn't help. Note the "16g" -> "8g" change:

      ~ elasticsearch {
            # (7 unchanged attributes hidden)
          ~ topology {
                id                        = "hot_content"
              ~ size                      = "16g" -> "8g"
                # (5 unchanged attributes hidden)
              ~ autoscaling {
                  + min_size          = "8g"
                    # (2 unchanged attributes hidden)
                }
            }
          ~ topology {
                id                        = "ml"
              ~ size                      = "2g" -> "4g"
                # (5 unchanged attributes hidden)
              ~ autoscaling {
                  ~ min_size          = "0g" -> "4g"
                    # (3 unchanged attributes hidden)
                }
            }
            # (2 unchanged blocks hidden)
        }

If we want configurable autoscaling, it's not obvious how to achieve it with ignore_changes.
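
For context, ignore_changes only accepts a static list of attribute paths; the lifecycle block cannot reference variables, so a hypothetical toggle like the following is rejected at plan time:

lifecycle {
  # Invalid: lifecycle arguments must be static, so the ignore list
  # cannot depend on var.autoscaling_enabled.
  ignore_changes = var.autoscaling_enabled ? [elasticsearch[0].topology[0].size] : []
}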

Terraform definition

resource "ec_deployment" "default" {
  name                   = var.deployment_name
  region                 = "gcp-${var.gcp_region}"
  version                = var.ec_version != "" ? var.ec_version : data.ec_stack.latest.version
  deployment_template_id = var.deployment_template_id

  elasticsearch {

    autoscale = var.autoscaling_enabled

    topology {
      id   = "hot_content"
      size = var.memory_hot_content

      dynamic "autoscaling" {
        for_each = var.autoscaling_enabled ? toset([1]) : toset([])
        content {
          min_size = var.memory_hot_content
        }
      }
    }

    topology {
      id   = "ml"
      size = var.memory_ml

      dynamic "autoscaling" {
        for_each = var.autoscaling_enabled ? toset([1]) : toset([])
        content {
          min_size = var.memory_ml
        }
      }
    }
  }

  kibana {}

  apm {}

}

Steps to Reproduce

  1. Enable autoscaling
  2. Run terraform apply again
  3. See that the plan diff wants to revert the autoscaler's changes

Context

We would like autoscaling to be configurable, and to be able to keep applying our Terraform code without reverting the changes made by the autoscaler.

Possible Solution

Your Environment

  • Version used: 0.4.0
  • Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version: Elastic Cloud SaaS
andrewnazarov added the bug label on Apr 5, 2022
andrewnazarov (Author) commented:

One more thing: if we run terraform apply against the corresponding plan, the following error appears:

Error: failed updating deployment: 1 error occurred:
│ 	* api error: root.invalid_json_request: JSON request does not comply with schema ([]: [Enum string expected])

andrewnazarov (Author) commented Apr 5, 2022

OK, there are two separate issues:

  1. If autoscaling is enabled, tf wants to revert the autoscaler's changes. This could be fixed by ignore_changes, but it's not clear how to deal with conditions; we'll probably need to create separate resources to achieve this (see the sketch after this list).
  2. If an autoscaling configuration is added to an existing deployment, the error appears. This is more likely the provider's issue (also mentioned here: Specifying autoscaling limits on tiers which do not support them results in an API error - JSON request does not comply with schema ([]: [Enum string expected]) #401 (comment))
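
A minimal sketch of that separate-resources idea (hypothetical: the resource names and the reduced topology are assumed, not taken from our actual module). Since ignore_changes must be static, each variant carries its own lifecycle block and count selects between them:

resource "ec_deployment" "autoscaled" {
  count = var.autoscaling_enabled ? 1 : 0

  name                   = var.deployment_name
  region                 = "gcp-${var.gcp_region}"
  version                = var.ec_version != "" ? var.ec_version : data.ec_stack.latest.version
  deployment_template_id = var.deployment_template_id

  elasticsearch {
    autoscale = true

    topology {
      id   = "hot_content"
      size = var.memory_hot_content
    }
  }

  lifecycle {
    # Static list: once the deployment exists, the autoscaler owns the size.
    ignore_changes = [elasticsearch[0].topology[0].size]
  }
}

resource "ec_deployment" "fixed" {
  count = var.autoscaling_enabled ? 0 : 1

  name                   = var.deployment_name
  region                 = "gcp-${var.gcp_region}"
  version                = var.ec_version != "" ? var.ec_version : data.ec_stack.latest.version
  deployment_template_id = var.deployment_template_id

  elasticsearch {
    topology {
      id   = "hot_content"
      size = var.memory_hot_content
    }
  }
}

Note that flipping var.autoscaling_enabled would destroy one resource and create the other, so this is only workable if recreating the deployment (or moving its state with terraform state mv) is acceptable.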

andrewnazarov (Author) commented:

OK, we've decided to opt in to autoscaling and rewrote the module so that autoscaling is not configurable and topology.size is ignored. The resulting diff is more or less respectable. However, the error

Error: failed updating deployment: 1 error occurred:
│ 	* api error: root.invalid_json_request: JSON request does not comply with schema ([]: [Enum string expected])

still prevents us from applying changes.

dimuon (Contributor) commented May 31, 2022

@andrewnazarov, regarding this issue:

Error: failed updating deployment: 1 error occurred:
│ 	* api error: root.invalid_json_request: JSON request does not comply with schema ([]: [Enum string expected])

It looks like the problem is the one mentioned by @tobio in #401 (comment): if you remove min_size from the hot_content tier, the error should go away.
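
Applied to the definition from the issue description, that would look something like the sketch below (assuming the autoscaling block is dropped for this tier entirely):

topology {
  id   = "hot_content"
  size = var.memory_hot_content

  # No min_size / autoscaling limits here: per #401, specifying limits
  # that the tier does not support triggers the "Enum string expected"
  # API error.
}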

Also, the error message itself should be fixed in provider 0.4.1.

dimuon (Contributor) commented Jun 9, 2022

@andrewnazarov, I think you can try to use autoscaling with a configurable topology using something similar to the snippet below. The important part is to specify topology elements for all tiers and to list them in alphabetical order of their id fields (this is a limitation of the implementation; please see #336 (comment) for more details):

data "ec_stack" "latest" {
  version_regex = "latest"
  region        = var.ec_region
  lock          = true
}

variable "ec_region" {
  type = string
  default = "gcp-us-central1"
}

variable "ec_version" {
  type = string
  default = ""
}

resource "ec_deployment" "default" {
  region                 = var.ec_region
  deployment_template_id = "gcp-storage-optimized"
  name                   = "test"
  version                = var.ec_version != "" ? var.ec_version : data.ec_stack.latest.version

  elasticsearch {

    autoscale = true

    topology {
      id                        = "cold"
    }
    
    topology {
      id                        = "frozen"
    }
    
    topology {
      id                        = "hot_content"
      size = "2g"
      
      autoscaling {
	max_size          = "128g"
	max_size_resource = "memory"
      }
    }
    
    topology {
      id                        = "ml"
    }
    
    topology {
      id                        = "warm"
    }

  }

  lifecycle {
    ignore_changes = [
      elasticsearch[0].topology[2].size
    ]
  }
  
}

The snippet sets the initial size of the hot_content tier to 2g but uses ignore_changes to ignore future changes to the size made by the autoscaler. Note that elasticsearch[0].topology[2] addresses hot_content only because of the alphabetical ordering (cold, frozen, hot_content, ml, warm); reordering or removing topology blocks would silently change which tier's size is ignored.

tobio (Member) commented Jun 14, 2022

IMO this should be part of the autoscaling example within the provider docs.

dimuon pushed a commit to dimuon/terraform-provider-ec that referenced this issue Jun 17, 2022
If `autoscale` is set, all topology elements that
 - either set `size` in the plan or
 - have a non-zero default `max_size` (read from the deployment template's `autoscaling_max` value)
have to be listed in alphabetical order of their `id` fields, even if their blocks don't specify other fields besides `id`.

Recommend use of `lifecycle`'s `ignore_changes` attribute.

Addresses elastic#462
dimuon (Contributor) commented Jun 17, 2022

@tobio, please have a look at #505.

dimuon added a commit that referenced this issue Jun 20, 2022
If `autoscale` is set, all topology elements that
 - either set `size` in the plan or
 - have a non-zero default `max_size` (read from the deployment template's `autoscaling_max` value)
have to be listed in alphabetical order of their `id` fields, even if their blocks don't specify other fields besides `id`.

Recommend use of the `lifecycle`'s `ignore_changes` meta-argument to ignore potential changes that can be made by the autoscaler.

Addresses #462.
dimuon (Contributor) commented Jul 6, 2022

@andrewnazarov, I'm going to close the issue. Please feel free to reopen it or open a new one if the proposed workaround doesn't work for you.

dimuon closed this as completed on Jul 6, 2022