ec_deployment: Support Autoscaling #258
Comments
@marclop how about having a single "autoscaling" object, with min/max values, rather than two separate objects per topology?
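For illustration, the single-object shape being suggested might look like this per topology (a hypothetical sketch of the suggestion, not a final schema; field names are placeholders):

```hcl
topology {
  id   = "hot_content"
  size = "32g"

  # Hypothetical: one "autoscaling" object carrying both bounds,
  # instead of separate min/max objects.
  autoscaling {
    min_size = "8g"
    max_size = "128g"
  }
}
```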
how's this look?

```hcl
resource "ec_deployment" "autoscale" {
  name = "autoscaling_deployment"

  # Mandatory fields
  region                 = "us-east-1"
  version                = "7.11.2"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    topology {
      id = "hot_content"

      # Optional: defaults to "memory"
      size_resource = "memory"

      # Current size of the deployment. If autoscaling kicks in, this
      # changes in the deployment status, so a "terraform plan" run after
      # the deployment has autoscaled will show a diff on this field.
      size = "32g"

      # Optional
      autoscale {
        # Optional: defaults to "memory"
        max_size_resource = "memory"
        # Required
        max_size = "128g"
      }
    }

    topology {
      id = "warm"

      # Optional: defaults to "memory"
      size_resource = "memory"

      # Current size of the deployment (see the note on "hot_content").
      size = "64g"

      # Optional
      autoscale {
        # Optional: defaults to "memory"
        max_size_resource = "memory"
        # Required
        max_size = "128g"
      }
    }

    topology {
      id = "cold"

      # Optional: defaults to "memory"
      size_resource = "memory"

      # Current size of the deployment (see the note on "hot_content").
      size = "128g"

      autoscale {
        # Optional: defaults to "memory"
        max_size_resource = "memory"
        # Required
        max_size = "256g"
      }
    }

    topology {
      id = "ml"

      # Optional
      autoscale {
        # Optional: defaults to "memory"
        max_size_resource = "memory"
        # Required
        max_size = "128g"

        # For Phase 1, only ML supports a minimum autoscaling size;
        # this is validated by the API.
        # Optional: defaults to "memory"
        min_size_resource = "memory"
        # Required
        min_size = "0g"
      }
    }
  }

  kibana {}
}
```
Please ignore this comment, I saw something that wasn't there. (Friday brain 😅 )
Overview
Add support for the new autoscaling feature. API Ref.
Possible Implementation