r/aws_backup_plan: Adding resource for managing AWS Backup plans #7350
Conversation
This is still a WIP. This PR's tests rely on #7207. I also need to write up a few more tests before this is ready for final review.
I think this is ready for review now.
Can you please rebase this with master? Thanks so much.
@bflad Done
@bflad Really need this resource added. Can it get some ❤️?
I get a type-related panic when trying to use Lifecycle. This configuration:

resource "aws_backup_plan" "my_backup_plan" {
  rule {
    ...
  }
}

passes a terraform plan, but apply crashes.
@mooseracer thanks for catching that! I'm looking into it right now. Should be an easy fix.
@mooseracer I think I fixed your issue. Could you try that again?
@slapula Sorry, tried it again but it's throwing the same crash errors.
Hi @slapula 👋 Thanks for submitting this. Some initial feedback provided below. Please reach out with any questions or if you do not have time to address them.
Regarding the trouble folks are seeing with lifecycle: it's because the schema is incorrectly defined and is not being error checked in the Read function, as noted below.
aws/resource_aws_backup_plan.go
Default: 180,
},
"lifecycle": {
Type: schema.TypeMap,
The Terraform Provider SDK does not currently support TypeMap configuration blocks. Did you mean TypeList with MaxItems: 1?
- Type: schema.TypeMap,
+ Type: schema.TypeList,
+ MaxItems: 1,
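If the schema moves to TypeList with MaxItems: 1, the lifecycle settings would be written as a nested block rather than a map assignment. A sketch of what a user configuration might then look like (argument names taken from the acceptance-test state output quoted later in this thread; the top-level name argument is assumed for illustration):

```hcl
resource "aws_backup_plan" "example" {
  # "name" is assumed here for illustration
  name = "example-backup-plan"

  rule {
    rule_name         = "example_rule"
    target_vault_name = "Default"
    schedule          = "cron(0 12 * * ? *)"

    # With TypeList/MaxItems: 1 this is a block, not `lifecycle = { ... }`
    lifecycle {
      cold_storage_after = 30
      delete_after       = 160
    }
  }
}
```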
Is this something that has been deprecated? I seem to see TypeMap used elsewhere in this provider.
A TypeMap attribute can currently[1] only be used with a single primitive type element (TypeString, TypeInt, or TypeFloat). Any Elem usage outside of that with TypeMap is an implementation bug and/or a quirk that just happens to work in the current Terraform Provider SDK, and that usage may not be supported in the future.
In Terraform 0.12 core, we can start supporting a TypeMap configuration block syntax, such as:
my_configuration_block "key1" {
  child_argument = ""
}
However, its implementation is still in its design phase and is not present in the Terraform Provider SDK. See also: https://github.com/hashicorp/terraform/issues/19749#issuecomment-450256714
[1]: The underlying type system in Terraform 0.12 removes this restriction. When the Terraform Provider SDK supports dynamic elements in a future update and we upgrade to that version of the Terraform Provider SDK (requires removing Terraform 0.11 support in a later major version of the Terraform AWS Provider), we can support multiple value types in TypeMap.
Oh wow, TIL. Thanks for the explanation! 😁
schedule = "cron(0 12 * * ? *)"
}
}
`, randInt, randInt, randInt)
You can avoid fmt argument sprawl by using indexed formatter references, e.g.
fmt.Sprintf(`
thing1 = %[1]d
thing2 = %[1]d
`, randInt)
@bflad Thanks for the review! I believe I've addressed all of your comments.
I think two more updates with testing and this should be good to go. Thanks so much for the work here, this functionality will make a bunch of folks happy.
Optional: true,
Default: 180,
},
"lifecycle": {
Can you please add an acceptance test that exercises this configuration block? Thanks! I think line 156 should be something like m["lifecycle"] = []interface{}{l}
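A minimal, standalone sketch of the shape that suggestion describes (flattenLifecycle and its values are illustrative, not the PR's actual code): the lifecycle attribute map must be wrapped in a one-element []interface{} so the flattened state matches a TypeList / MaxItems: 1 schema.

```go
package main

import "fmt"

// flattenLifecycle mirrors the suggested fix: wrap the lifecycle
// attribute map in a single-element list, as TypeList with
// MaxItems: 1 expects, instead of assigning the map directly.
func flattenLifecycle(coldStorageAfter, deleteAfter int64) []interface{} {
	l := map[string]interface{}{
		"cold_storage_after": coldStorageAfter,
		"delete_after":       deleteAfter,
	}
	return []interface{}{l}
}

func main() {
	m := map[string]interface{}{}
	m["lifecycle"] = flattenLifecycle(30, 160)
	fmt.Println(m["lifecycle"])
}
```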
One question that's been nagging me as I've been working on these resources... How do I test for attributes in a list where the identifier appears to be a random number instead of an ordered number? For example:
rule.# = 1
rule.160206105.completion_window = 180
rule.160206105.lifecycle.# = 1
rule.160206105.lifecycle.0.cold_storage_after = 30
rule.160206105.lifecycle.0.delete_after = 160
rule.160206105.recovery_point_tags.% = 0
rule.160206105.rule_name = tf_acc_test_backup_rule_7563947093289689272
rule.160206105.schedule = cron(0 12 * * ? *)
rule.160206105.start_window = 60
rule.160206105.target_vault_name = tf_acc_test_backup_vault_7563947093289689272
How do I test the attributes of rule when 160206105 changes with each test run?
Unfortunately in these cases, the randomization gets complicated for the acceptance testing. You'll probably want to make the rule names static (unless the randomization there actually matters). While it is technically possible to calculate the hash of the TypeSet, once you're in this situation it is generally easier to take the resultant API object returned by the Exists TestCheckFunc and check the values within it directly, e.g.
func testAccCheckBackupPlanRules(plan *backup.GetBackupPlanOutput, expectedRules []*backup.Rule) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		if a, e := len(plan.Plan.Rules), len(expectedRules); a != e {
			return fmt.Errorf("expected %d Backup Plan Rules, got: %d", e, a)
		}
		for _, expectedRule := range expectedRules {
			for _, rule := range plan.Plan.Rules {
				if aws.StringValue(rule.RuleName) != aws.StringValue(expectedRule.RuleName) {
					continue
				}
				ruleName := aws.StringValue(rule.RuleName)
				if a, e := aws.StringValue(rule.Schedule), aws.StringValue(expectedRule.Schedule); a != e {
					return fmt.Errorf("expected Backup Plan Rule (%s) schedule to match (%s), got: %s", ruleName, e, a)
				}
				// Check other various attributes as appropriate
				break
			}
		}
		return nil
	}
}
// In a TestStep, filling in rule details as necessary
testAccCheckAwsBackupPlanExists("aws_backup_plan.test", &plan),
testAccCheckBackupPlanRules(&plan, []*backup.Rule{
	{
		RuleName: aws.String("test"),
		Schedule: aws.String("cron(0 12 * * ? *)"),
	},
}),
As an added assurance to the above, acceptance tests that implement TestSteps with ImportStateVerify: true will automatically ensure that the resource read function is correctly able to re-read the API object and generate the same Terraform state.
Ah ok, that makes sense. There really isn't a requirement for the plan to be unique other than to help me avoid running into dangling resources while testing. For this test, I'm going to opt not to use the random int so I can reliably test for the existence of the lifecycle.
@bflad I believe I've addressed your last round of comments. I've implemented the acceptance test for lifecycle blocks. I've also added the missing read/update functionality for
Thanks so much, @slapula 🚀 (TravisCI failures related to #7993)
--- PASS: TestAccAwsBackupPlan_disappears (11.86s)
--- PASS: TestAccAwsBackupPlan_basic (12.99s)
--- PASS: TestAccAwsBackupPlan_withRules (13.14s)
--- PASS: TestAccAwsBackupPlan_withLifecycle (13.26s)
--- PASS: TestAccAwsBackupPlan_withRuleRemove (20.08s)
--- PASS: TestAccAwsBackupPlan_withRuleAdd (20.34s)
--- PASS: TestAccAwsBackupPlan_withTags (31.76s)
Thanks for all the work on this! I'd just like to report a logical bug with the Lifecycle -- it should be able to accept values of "delete_after" when "cold_storage_after" is left unspecified. Right now it's always passing "cold_storage_after" to the API and it gets validated against their logic:

lifecycle = {
  delete_after = 15
}

Error in rule backup-plan-rule : Invalid lifecycle. DeleteAfterDays cannot be less than 90 days apart from MoveToColdStorageAfterDays
status code: 400
This represents a simple scenario where you don't want to transition to
cold storage and only keep X days of snapshots. It's supported by the API;
you can set it in the GUI and the following JSON is accepted by AWS CLI:
{
  "BackupPlan": {
    "BackupPlanName": "myplan",
    "Rules": [
      {
        "RuleName": "myrule",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 10 * * ? *)",
        "Lifecycle": {
          "DeleteAfterDays": 15
        }
      }
    ]
  }
}
@mooseracer Can you submit this as an issue? I noticed some weirdness with these API calls when I was working on this, and if I remember correctly, AWS wasn't letting me submit requests with any empty or zero-valued parameters. As you can see, I made them both required to work around the issue. I'll need to take another look to confirm all of this and think of a better solution.
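One possible direction for that better solution, sketched with a stand-in struct rather than the real AWS SDK backup.Lifecycle type (expandLifecycle and the zero-value checks here are illustrative, not the provider's actual code): populate only the fields the user set, so an unset cold_storage_after is omitted from the request instead of being sent as zero.

```go
package main

import "fmt"

// lifecycleInput stands in for the AWS SDK's Lifecycle type, which
// uses *int64 fields so that unset values can be omitted entirely.
type lifecycleInput struct {
	ColdStorageAfter *int64
	DeleteAfter      *int64
}

// expandLifecycle builds the input from the raw config map, setting
// only the keys the user actually supplied. A nil pointer means the
// parameter is not sent to the API at all, which avoids tripping the
// "DeleteAfterDays cannot be less than 90 days apart from
// MoveToColdStorageAfterDays" validation when cold storage is unused.
func expandLifecycle(raw map[string]interface{}) *lifecycleInput {
	l := &lifecycleInput{}
	if v, ok := raw["cold_storage_after"].(int); ok && v > 0 {
		n := int64(v)
		l.ColdStorageAfter = &n
	}
	if v, ok := raw["delete_after"].(int); ok && v > 0 {
		n := int64(v)
		l.DeleteAfter = &n
	}
	return l
}

func main() {
	l := expandLifecycle(map[string]interface{}{"delete_after": 15})
	fmt.Println(l.ColdStorageAfter == nil, *l.DeleteAfter)
}
```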
This has been released in version 2.3.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Reference: #7166
Changes proposed in this pull request:
aws_backup_plan
Output from acceptance testing: