[DOM-46192] Adding required labels for cross account backups (#142)
* [DOM-46192] Adding required labels for cross account backups

* Adding link in readme

* More details

* PR Feedback

* Adding s3 logs

* Fix variable issue

* Docs

* Docs

* README

* README

* Update README.md

Co-authored-by: Steven Davidovitz <[email protected]>

* Feedback

* Feedback

---------

Co-authored-by: Steven Davidovitz <[email protected]>
ldebello-ddl and steved authored Sep 29, 2023
1 parent e092cb5 commit 6d813a3
Showing 8 changed files with 50 additions and 5 deletions.
34 changes: 34 additions & 0 deletions README.md
@@ -281,3 +281,37 @@ Run the command below to generate a list of infrastructure values. These values
```

This command will output a set of key-value pairs, extracted from the infrastructure setup, that can be used as inputs in the domino.yaml configuration file.


## Domino Backups
If you would like to increase the safety of data stored in AWS S3 and EFS by backing it up to another account (within the same AWS Organization), use the [terraform-aws-domino-backup](https://github.com/dominodatalab/terraform-aws-domino-backup) module:

1. Define an additional provider for the backup account in the infra module's `main.tf`.

Location
```bash
domino-deploy
├── terraform
│   ├── infra
│   │   ├── main.tf
```

Content
```hcl
provider "aws" {
  alias  = "domino-backup"
  region = <<Backup Account Region>>
}
```

2. Add the following module block to the same file:

```hcl
module "backups" {
  count  = 1
  source = "github.com/dominodatalab/terraform-aws-domino-backup.git?ref=v1.0.10"
  providers = {
    aws.dst = aws.domino-backup
  }
}
```
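The backup module relies on the source resources carrying the tag this change introduces; the infra module gates that tag behind the `storage.enable_remote_backup` input (default `false`). A minimal sketch of enabling it, assuming inputs are supplied through a `terraform.tfvars` file (the file name is illustrative):

```hcl
# terraform.tfvars for the infra module.
# Setting enable_remote_backup = true applies the
# "backup_plan" = "cross-account" tag to the EFS filesystem and
# the S3 buckets, which the commit describes as required labels
# for cross-account backups.
storage = {
  enable_remote_backup = true
}
```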
2 changes: 1 addition & 1 deletion modules/infra/README.md
@@ -64,7 +64,7 @@
| <a name="input_region"></a> [region](#input\_region) | AWS region for the deployment | `string` | n/a | yes |
| <a name="input_route53_hosted_zone_name"></a> [route53\_hosted\_zone\_name](#input\_route53\_hosted\_zone\_name) | Optional hosted zone for External DNS zone. | `string` | `null` | no |
| <a name="input_ssh_pvt_key_path"></a> [ssh\_pvt\_key\_path](#input\_ssh\_pvt\_key\_path) | SSH private key filepath. | `string` | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> })</pre> | `{}` | no |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {}),<br> enable_remote_backup = optional(bool, false)<br> })</pre> | `{}` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Deployment tags. | `map(string)` | `{}` | no |

## Outputs
2 changes: 1 addition & 1 deletion modules/infra/submodules/storage/README.md
@@ -67,7 +67,7 @@ No modules.
| <a name="input_deploy_id"></a> [deploy\_id](#input\_deploy\_id) | Domino Deployment ID | `string` | n/a | yes |
| <a name="input_kms_info"></a> [kms\_info](#input\_kms\_info) | key\_id = KMS key id.<br> key\_arn = KMS key arn.<br> enabled = KMS key is enabled | <pre>object({<br> key_id = string<br> key_arn = string<br> enabled = bool<br> })</pre> | n/a | yes |
| <a name="input_network_info"></a> [network\_info](#input\_network\_info) | id = VPC ID.<br> subnets = {<br> public = List of public Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> private = List of private Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> pod = List of pod Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> } | <pre>object({<br> vpc_id = string<br> subnets = object({<br> public = optional(list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> })), [])<br> private = list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> }))<br> pod = optional(list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> })), [])<br> })<br> })</pre> | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string)<br> backup_vault = optional(object({<br> create = optional(bool)<br> force_destroy = optional(bool)<br> backup = optional(object({<br> schedule = optional(string)<br> cold_storage_after = optional(number)<br> delete_after = optional(number)<br> }))<br> }))<br> }))<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> })</pre> | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string)<br> backup_vault = optional(object({<br> create = optional(bool)<br> force_destroy = optional(bool)<br> backup = optional(object({<br> schedule = optional(string)<br> cold_storage_after = optional(number)<br> delete_after = optional(number)<br> }))<br> }))<br> }))<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> enable_remote_backup = optional(bool)<br> })</pre> | n/a | yes |

## Outputs

4 changes: 2 additions & 2 deletions modules/infra/submodules/storage/efs.tf
@@ -5,9 +5,9 @@ resource "aws_efs_file_system" "eks" {
throughput_mode = "bursting"
kms_key_id = local.kms_key_arn

tags = {
tags = merge(local.backup_tagging, {
"Name" = var.deploy_id
}
})

lifecycle {
ignore_changes = [
4 changes: 4 additions & 0 deletions modules/infra/submodules/storage/main.tf
@@ -38,4 +38,8 @@ locals {
arn = aws_s3_bucket.registry.arn
}
}

backup_tagging = var.storage.enable_remote_backup ? {
"backup_plan" = "cross-account"
} : {}
}
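The `backup_tagging` local above follows a common Terraform pattern: a conditional that yields either a tag map or an empty map, so every resource can unconditionally `merge()` it into its tags. A standalone sketch of the pattern (variable, resource, and tag values are illustrative, not taken from this repository):

```hcl
variable "enable_remote_backup" {
  type    = bool
  default = false
}

locals {
  # Empty map when disabled, so the merge() below becomes a no-op.
  backup_tagging = var.enable_remote_backup ? { "backup_plan" = "cross-account" } : {}
}

resource "aws_efs_file_system" "example" {
  # The backup_plan tag appears only when enable_remote_backup = true;
  # the Name tag is always applied.
  tags = merge(local.backup_tagging, { "Name" = "example" })
}
```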
4 changes: 4 additions & 0 deletions modules/infra/submodules/storage/s3.tf
@@ -7,6 +7,7 @@ resource "aws_s3_bucket" "backups" {
force_destroy = var.storage.s3.force_destroy_on_deletion
object_lock_enabled = false

tags = local.backup_tagging
}

data "aws_iam_policy_document" "backups" {
@@ -73,6 +74,8 @@ resource "aws_s3_bucket" "blobs" {
bucket = "${var.deploy_id}-blobs"
force_destroy = var.storage.s3.force_destroy_on_deletion
object_lock_enabled = false

tags = local.backup_tagging
}

data "aws_iam_policy_document" "blobs" {
@@ -142,6 +145,7 @@ resource "aws_s3_bucket" "logs" {
force_destroy = var.storage.s3.force_destroy_on_deletion
object_lock_enabled = false

tags = local.backup_tagging
}

data "aws_iam_policy_document" "logs" {
2 changes: 2 additions & 0 deletions modules/infra/submodules/storage/variables.tf
@@ -42,6 +42,7 @@ variable "storage" {
ecr = {
force_destroy_on_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.
}
enable_remote_backup = Enable tagging required for cross-account backups
}
}
EOF
@@ -64,6 +65,7 @@ variable "storage" {
ecr = optional(object({
force_destroy_on_deletion = optional(bool)
}))
enable_remote_backup = optional(bool)
})
}

3 changes: 2 additions & 1 deletion modules/infra/variables.tf
@@ -356,7 +356,8 @@ variable "storage" {
}), {})
ecr = optional(object({
force_destroy_on_deletion = optional(bool, true)
}), {})
}), {}),
enable_remote_backup = optional(bool, false)
})

default = {}
