r/aws_backup_plan: Adding resource for managing AWS Backup plans #7350
Changes from 13 commits
@@ -0,0 +1,311 @@

```go
package aws

import (
	"bytes"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/backup"
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceAwsBackupPlan() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsBackupPlanCreate,
		Read:   resourceAwsBackupPlanRead,
		Update: resourceAwsBackupPlanUpdate,
		Delete: resourceAwsBackupPlanDelete,

		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			"rule": {
				Type:     schema.TypeSet,
				Required: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"rule_name": {
							Type:     schema.TypeString,
							Required: true,
						},
						"target_vault_name": {
							Type:     schema.TypeString,
							Required: true,
						},
						"schedule": {
							Type:     schema.TypeString,
							Optional: true,
						},
						"start_window": {
							Type:     schema.TypeInt,
							Optional: true,
							Default:  60,
						},
						"completion_window": {
							Type:     schema.TypeInt,
							Optional: true,
							Default:  180,
						},
						"lifecycle": {
							Type: schema.TypeMap,
```
Reviewer: The Terraform Provider SDK does not currently support `TypeMap` configuration blocks. Did you mean:

Suggested change

slapula: Is this something that has been deprecated? I seem to see `TypeMap` used elsewhere in this provider.

Reviewer: In Terraform 0.12 core, we can start supporting a block syntax like:

```hcl
my_configuration_block "key1" {
  child_argument = ""
}
```

However, its implementation is still in its design phases and is not present in the Terraform Provider SDK. See also: https://github.com/hashicorp/terraform/issues/19749#issuecomment-450256714

[1]: The underlying type system in Terraform 0.12 removes this restriction. When the Terraform Provider SDK supports dynamic elements in a future update and we upgrade to that version of the Terraform Provider SDK (which requires removing Terraform 0.11 support in a later major version of the Terraform AWS Provider), we can support multiple value types in `TypeMap`.

slapula: Oh wow, TIL. Thanks for the explanation! 😁
```go
							Optional: true,
							Elem: &schema.Resource{
								Schema: map[string]*schema.Schema{
									"cold_storage_after": {
										Type:     schema.TypeInt,
										Optional: true,
									},
									"delete_after": {
										Type:     schema.TypeInt,
										Optional: true,
									},
								},
							},
						},
						"recovery_point_tags": {
							Type:     schema.TypeMap,
							Optional: true,
							Elem:     &schema.Schema{Type: schema.TypeString},
						},
					},
				},
				Set: resourceAwsPlanRuleHash,
			},
			"tags": {
				Type:     schema.TypeMap,
				Optional: true,
				ForceNew: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"arn": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"version": {
				Type:     schema.TypeString,
				Computed: true,
			},
		},
	}
}
```
```go
func resourceAwsBackupPlanCreate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).backupconn

	plan := &backup.PlanInput{
		BackupPlanName: aws.String(d.Get("name").(string)),
	}

	plan.Rules = gatherPlanRules(d)

	input := &backup.CreateBackupPlanInput{
		BackupPlan: plan,
	}

	if v, ok := d.GetOk("tags"); ok {
		// d.Get on a TypeMap attribute returns map[string]interface{};
		// asserting map[string]*string directly would panic at runtime.
		tags := make(map[string]*string)
		for key, value := range v.(map[string]interface{}) {
			tags[key] = aws.String(value.(string))
		}
		input.BackupPlanTags = tags
	}

	resp, err := conn.CreateBackupPlan(input)
	if err != nil {
		return err
	}

	d.SetId(*resp.BackupPlanId)

	return resourceAwsBackupPlanRead(d, meta)
}
```
```go
func resourceAwsBackupPlanRead(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).backupconn

	input := &backup.GetBackupPlanInput{
		BackupPlanId: aws.String(d.Id()),
	}

	resp, err := conn.GetBackupPlan(input)
	if err != nil {
		return err
	}

	rule := &schema.Set{F: resourceAwsPlanRuleHash}

	for _, r := range resp.BackupPlan.Rules {
		m := make(map[string]interface{})

		if r.CompletionWindowMinutes != nil {
			m["completion_window"] = *r.CompletionWindowMinutes
		}
		if r.Lifecycle != nil {
			l := map[string]int64{}
			if r.Lifecycle.DeleteAfterDays != nil {
				l["delete_after"] = *r.Lifecycle.DeleteAfterDays
			}
			if r.Lifecycle.MoveToColdStorageAfterDays != nil {
				l["cold_storage_after"] = *r.Lifecycle.MoveToColdStorageAfterDays
			}
			m["lifecycle"] = l
		}
		if r.RecoveryPointTags != nil {
			m["recovery_point_tags"] = r.RecoveryPointTags
		}
		m["rule_name"] = *r.RuleName
		if r.ScheduleExpression != nil {
			m["schedule"] = *r.ScheduleExpression
		}
		if r.StartWindowMinutes != nil {
			m["start_window"] = *r.StartWindowMinutes
		}
		m["target_vault_name"] = *r.TargetBackupVaultName

		rule.Add(m)
	}
	d.Set("rule", rule)

	d.Set("arn", resp.BackupPlanArn)
	d.Set("version", resp.VersionId)

	return nil
}
```
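A later review comment suggests that a nested block read back from the API is conventionally flattened into a one-element list rather than a bare map. A minimal sketch of that flattening, using a local stand-in struct for `backup.Lifecycle` (the real SDK type is assumed, not imported here):

```go
package main

import "fmt"

// Hypothetical stand-in for the SDK's backup.Lifecycle struct; fields
// are pointers because the AWS API omits unset values.
type lifecycle struct {
	DeleteAfterDays            *int64
	MoveToColdStorageAfterDays *int64
}

// flattenLifecycle converts the API shape into the one-element
// []interface{} form Terraform uses for nested configuration blocks.
func flattenLifecycle(l *lifecycle) []interface{} {
	if l == nil {
		return nil
	}
	m := map[string]interface{}{}
	if l.DeleteAfterDays != nil {
		m["delete_after"] = int(*l.DeleteAfterDays)
	}
	if l.MoveToColdStorageAfterDays != nil {
		m["cold_storage_after"] = int(*l.MoveToColdStorageAfterDays)
	}
	return []interface{}{m}
}

func main() {
	days := int64(120)
	fmt.Println(flattenLifecycle(&lifecycle{DeleteAfterDays: &days}))
}
```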
```go
func resourceAwsBackupPlanUpdate(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).backupconn

	plan := &backup.PlanInput{
		BackupPlanName: aws.String(d.Get("name").(string)),
	}

	plan.Rules = gatherPlanRules(d)

	input := &backup.UpdateBackupPlanInput{
		BackupPlanId: aws.String(d.Id()),
		BackupPlan:   plan,
	}

	_, err := conn.UpdateBackupPlan(input)
	if err != nil {
		return err
	}

	return resourceAwsBackupPlanRead(d, meta)
}
```
```go
func resourceAwsBackupPlanDelete(d *schema.ResourceData, meta interface{}) error {
	conn := meta.(*AWSClient).backupconn

	input := &backup.DeleteBackupPlanInput{
		BackupPlanId: aws.String(d.Id()),
	}

	_, err := conn.DeleteBackupPlan(input)
	if err != nil {
		return err
	}

	return nil
}
```
```go
func gatherPlanRules(d *schema.ResourceData) []*backup.RuleInput {
	rules := []*backup.RuleInput{}
	planRules := d.Get("rule").(*schema.Set).List()

	for _, i := range planRules {
		item := i.(map[string]interface{})
		lifecycle := item["lifecycle"].(map[string]interface{})

		rule := &backup.RuleInput{}

		if item["rule_name"] != "" {
			rule.RuleName = aws.String(item["rule_name"].(string))
		}
		if item["target_vault_name"] != "" {
			rule.TargetBackupVaultName = aws.String(item["target_vault_name"].(string))
		}
		if item["schedule"] != "" {
			rule.ScheduleExpression = aws.String(item["schedule"].(string))
		}
		if item["start_window"] != nil {
			rule.StartWindowMinutes = aws.Int64(int64(item["start_window"].(int)))
		}
		if item["completion_window"] != nil {
			rule.CompletionWindowMinutes = aws.Int64(int64(item["completion_window"].(int)))
		}
		if lifecycle["delete_after"] != nil || lifecycle["cold_storage_after"] != nil {
			// Lifecycle must be allocated before its fields are set;
			// assigning through a nil pointer would panic.
			rule.Lifecycle = &backup.Lifecycle{}
			if lifecycle["delete_after"] != nil {
				rule.Lifecycle.DeleteAfterDays = aws.Int64(int64(lifecycle["delete_after"].(int)))
			}
			if lifecycle["cold_storage_after"] != nil {
				rule.Lifecycle.MoveToColdStorageAfterDays = aws.Int64(int64(lifecycle["cold_storage_after"].(int)))
			}
		}
		if item["recovery_point_tags"] != nil {
			tagsUnwrapped := make(map[string]*string)
			for key, value := range item["recovery_point_tags"].(map[string]interface{}) {
				tagsUnwrapped[key] = aws.String(value.(string))
			}
			rule.RecoveryPointTags = tagsUnwrapped
		}

		rules = append(rules, rule)
	}

	return rules
}
```
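The `recovery_point_tags` handling above uses a common idiom: converting the `map[string]interface{}` that Terraform produces for `TypeMap` attributes into the `map[string]*string` the AWS SDK expects. A self-contained sketch of that conversion, with a hypothetical `strPtr` helper standing in for `aws.String`:

```go
package main

import "fmt"

// strPtr mimics aws.String: it returns a pointer to the given string.
func strPtr(s string) *string { return &s }

// expandStringMap converts the map[string]interface{} produced by
// Terraform's TypeMap attributes into the map[string]*string shape
// used by AWS SDK inputs (the same idiom gatherPlanRules applies to
// recovery_point_tags).
func expandStringMap(in map[string]interface{}) map[string]*string {
	out := make(map[string]*string, len(in))
	for k, v := range in {
		out[k] = strPtr(v.(string))
	}
	return out
}

func main() {
	tags := expandStringMap(map[string]interface{}{"Env": "test"})
	fmt.Println(*tags["Env"])
}
```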
```go
func resourceAwsPlanRuleHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})

	if m["lifecycle"] != nil {
		l := m["lifecycle"].(map[string]interface{})
		if w, ok := l["delete_after"]; ok {
			buf.WriteString(fmt.Sprintf("%d-", w.(int)))
		}

		if w, ok := l["cold_storage_after"]; ok {
			buf.WriteString(fmt.Sprintf("%d-", w.(int)))
		}
	}

	if v, ok := m["completion_window"]; ok {
		buf.WriteString(fmt.Sprintf("%d-", v))
	}

	if v, ok := m["recovery_point_tags"]; ok {
		switch t := v.(type) {
		case map[string]*string:
			buf.WriteString(fmt.Sprintf("%v-", t))
		case map[string]interface{}:
			tagsUnwrapped := make(map[string]*string)
			for key, value := range t {
				tagsUnwrapped[key] = aws.String(value.(string))
			}
			buf.WriteString(fmt.Sprintf("%v-", tagsUnwrapped))
		default:
			fmt.Println("invalid type: ", t)
		}
	}

	if v, ok := m["rule_name"]; ok {
		buf.WriteString(fmt.Sprintf("%s-", v.(string)))
	}

	if v, ok := m["schedule"]; ok {
		buf.WriteString(fmt.Sprintf("%s-", v.(string)))
	}

	if v, ok := m["start_window"]; ok {
		buf.WriteString(fmt.Sprintf("%d-", v))
	}

	if v, ok := m["target_vault_name"]; ok {
		buf.WriteString(fmt.Sprintf("%s-", v.(string)))
	}

	return hashcode.String(buf.String())
}
```
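The hash function above serializes a set element's attributes in a fixed order and hashes the resulting string; this is how `TypeSet` identifies elements without ordered indexes. A stand-alone sketch of the idea: `hashcode.String` in the Terraform helper library is, to my knowledge, based on CRC-32 (IEEE) folded to a non-negative int, so plain `hash/crc32` is used here as a stand-in.

```go
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
)

// setHash sketches what resourceAwsPlanRuleHash does: write the
// identifying attributes of a set element into a buffer in a fixed
// order, then hash the canonical string. Equal elements always
// produce equal hashes; different elements almost surely differ.
func setHash(m map[string]interface{}) int {
	var buf bytes.Buffer
	for _, k := range []string{"rule_name", "schedule", "target_vault_name"} {
		if v, ok := m[k]; ok {
			buf.WriteString(fmt.Sprintf("%s-", v.(string)))
		}
	}
	return int(crc32.ChecksumIEEE(buf.Bytes()))
}

func main() {
	a := setHash(map[string]interface{}{"rule_name": "daily", "target_vault_name": "vault"})
	b := setHash(map[string]interface{}{"rule_name": "daily", "target_vault_name": "vault"})
	fmt.Println(a == b)
}
```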
Reviewer: Can you please add an acceptance test that exercises this configuration block? Thanks! I also think line 156 should be something like:

```go
m["lifecycle"] = []interface{}{l}
```

slapula: One question that's been nagging me as I've been working on these resources: how do I test for attributes in a list where the identifier appears to be a random number instead of an ordered number? For example, how do I test the attributes of `rule` when `160206105` changes with each test run?

Reviewer: Unfortunately in these cases, the randomization complicates the acceptance testing. You'll probably want to make the rule names static (unless the randomization there actually matters). While it is technically possible to calculate the hash of the `TypeSet`, once you're in this situation it is generally easier to take the resultant API object returned by the `Exists` `TestCheckFunc` and check the values within it directly. As an added assurance, acceptance tests that implement `TestStep`s with `ImportStateVerify: true` will automatically ensure that the resource read function is correctly able to re-read the API object and generate the same Terraform state.

slapula: Ah ok, that makes sense. There really isn't a requirement for the plan to be unique other than to help me avoid running into dangling resources while testing. For this test, I'm going to opt to not use the random `int` so I can reliably test for the existence of the lifecycle.
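The reviewer's suggestion (inspect the API object returned by the `Exists` check instead of guessing hash-based state indexes) can be sketched in plain Go. The struct types below are hypothetical stand-ins for the backup SDK types; a real acceptance test would capture the actual `backup.GetBackupPlanOutput` and use a `TestCheckFunc`.

```go
package main

import "fmt"

// Hypothetical stand-ins for the API objects an Exists-style check
// would capture; field names mirror the backup SDK but these are
// local illustrations only.
type rule struct {
	RuleName              string
	TargetBackupVaultName string
}

type plan struct {
	Rules []rule
}

// checkPlanHasRule inspects the retrieved API object directly, so the
// test does not depend on the hash-derived TypeSet index in state.
func checkPlanHasRule(p *plan, name string) error {
	for _, r := range p.Rules {
		if r.RuleName == name {
			return nil
		}
	}
	return fmt.Errorf("rule %q not found in plan", name)
}

func main() {
	p := &plan{Rules: []rule{{RuleName: "tf_acc_test_rule", TargetBackupVaultName: "vault"}}}
	fmt.Println(checkPlanHasRule(p, "tf_acc_test_rule"))
}
```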