Feature request: Please support parent_id on recurring Downtimes #109
Comments
Added …
@vanvlack thanks very much for getting that piece done. I don't know what code is required to add support in this provider. I believe if the provider was able to compare a new …
I wanted to add some color to this issue. We've been trying to get our entire monitoring infrastructure defined in Terraform, and recurring downtimes are the only resource we've been unable to manage in Terraform. We use recurring downtimes almost exclusively, e.g. for anomaly monitors that get noisy during off hours.

Because the recurring downtime model changes the ID on every recurrence, it breaks Terraform's model that re-applying a configuration without any changes should produce no backend resource changes. This was quite frustrating: from Terraform's perspective, the originally created downtime just "disappears", and re-applying the same configuration causes a 400 error (since it tries to create a new recurring downtime in the past, because you have to specify a fixed `start`).

I know I'm not being super helpful by describing a problem we already know exists, but it might be worth updating the Terraform documentation to reflect that recurring downtimes don't work as expected from Terraform's perspective until this issue is resolved (I also think it's closer to a bug than a feature request, IMHO). Happy to provide additional insight from our experience, but I suspect many people have taken a similar path to ours and just reverted to managing downtimes through Datadog's UI.
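For illustration, here is a minimal sketch of the kind of recurring "off hours" downtime involved; the resource name, scope, and timestamps are hypothetical, not taken from our actual configuration, and the exact schema may differ by provider version:

```hcl
# Hypothetical example of a nightly "off hours" downtime; scope and
# timestamps are illustrative only.
resource "datadog_downtime" "off_hours" {
  scope = ["env:production"]

  # Fixed epoch timestamps for the first occurrence.
  start = 1568674800 # 2019-09-16 23:00 UTC
  end   = 1568700000 # 2019-09-17 06:00 UTC

  # Repeats every day. Datadog rotates the downtime id on each
  # recurrence, so the id recorded in Terraform state stops existing
  # and the next plan/apply wants to recreate the resource.
  recurrence {
    type   = "days"
    period = 1
  }
}
```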
Hi, I've looked into implementing this using the downtime `parent_id` field.

Steps to reproduce:
I can think of at least two options that could make this work:

…
FWIW, I pushed a POC here: https://github.com/pdecat/terraform-provider-datadog/tree/recurrent_downtimes (https://github.com/pdecat/terraform-provider-datadog/commit/44f4ecd27b36371e9ca4cb8f0855d90c2d1a3947)

Applied this yesterday (Monday 2019/09/16):

Today's plan with 2.4.0 (Tuesday 2019/09/17):

Update: and as expected, the day after (Wednesday 2019/09/18), this no longer works because the first child of the original downtime was deleted:
@pdecat any reason we need to know that original `id`?

edit: realizing this doesn't actually help us, as changes will need to somehow be tied to the new rotated monitors...
The … It might need a request to Datadog to support something like a …

I am happy to put in a support query to see what they say.
👋 This is something we're looking to address in the nearish future, among some other changes making downtimes (mostly) immutable (to address other edge cases people have run into). Thank you for this helpful feedback 😄
@platinummonkey that's great news. Please keep us updated on any progress 🍻
@platinummonkey has there been any progress towards making recurring downtimes manageable in Terraform?
@platinummonkey Just wondering if there has been any progress on this yet?
We're tracking this internally and have some work queued that should address this.
Hello,

Thanks for your patience on this. You have to update your Terraform provider to version 3.10 to benefit from the fix. You can find the PR that addresses this issue here; it contains a very detailed description of the change made and of the remaining caveat that we are still working to improve.

I'll go ahead and resolve this issue, but feel free to let us know if you have any questions or feedback. Thanks again for reporting this issue and helping us improve the Terraform provider to better manage downtimes.
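For anyone updating to pick up the fix, a minimal sketch of a provider version constraint, assuming Terraform 0.13+ syntax (the exact minimum version to require is per the comment above):

```hcl
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
      # Require a release that includes the recurring-downtime fix.
      version = ">= 3.10.0"
    }
  }
}
```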
Our team is trying to use Terraform to manage a scheduled monthly downtime for Datadog. It occurs on the first day of the month for one hour.

I imported the existing downtime monitor to avoid manually adding the `start` and `end` values, and it worked fine until the next downtime was completed and the `id` value changed. I asked about this behaviour in the Datadog Slack channel and was told this is the way the downtime monitors work: the first `id` value runs, and when it is complete a new `id` value is created with the `parent_id` value set to the original `id` value.

If the Datadog provider can process the extra pieces of information, the downtimes would not appear in the plan as a creation. It would hopefully manage the `id` value transparently in the state file somehow.

Terraform Version
Terraform v0.11.10
Affected Resource(s)
datadog_downtime
Terraform Configuration Files
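The original configuration is not reproduced here; as a rough, hypothetical sketch of the monthly one-hour downtime described above (scope and timestamps are placeholders, not the actual values):

```hcl
# Hypothetical reconstruction, not the original configuration:
# a downtime on the first day of each month, lasting one hour.
resource "datadog_downtime" "monthly_maintenance" {
  scope = ["*"] # placeholder scope

  start = 1556668800 # 2019-05-01 00:00 UTC (illustrative)
  end   = 1556672400 # one hour later

  recurrence {
    type   = "months"
    period = 1
  }
}
```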
Expected Behavior
`terraform plan` will look for updates to the downtime_monitor but will not consider it an addition.

Actual Behavior
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform init`
2. `terraform plan`
References
The API code examples show the `parent_id` field; it is not mentioned in the attached documentation.

https://docs.datadoghq.com/api/?lang=python#schedule-monitor-downtime