k9 Security's terraform-aws-s3-bucket helps you protect data by creating an AWS S3 bucket with safe defaults and a least-privilege bucket policy built on the k9 access capability model.
There are several problems engineers must solve when securing data in an S3 bucket, especially when sharing an AWS account. To secure your data, you'll need to:
- configure several distinct S3 resources: the bucket, the bucket policy, 'block public access' configurations
- create security policies that allow access by authorized principals and deny everyone else
- adjust standard Terraform resource configurations, which generally mirror AWS API defaults, to current best practices
- capture enough context to scale security, governance, risk, and compliance activities efficiently
Configuring your intended access can be especially difficult. First, there are complicated interactions between IAM and resource policies. Second, IAM policies without resource conditions (e.g. AWS Managed Policies) overprovision access to all resources of that API resource type. Learn more about why writing these security policies is hard in this blog post or video. A primary access control goal is to prevent an exploit of one application from leading to the breach of another application's data, e.g. a firewall role being used to steal credit application data.
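To make the overprovisioning risk concrete, here is a minimal sketch (not part of this module) contrasting an IAM statement without a resource condition against one scoped to a single bucket; the bucket ARN is a placeholder:

```hcl
# Illustrative only: the unscoped statement reaches objects in every bucket
# in the account, while the scoped statement limits the blast radius to one.
data "aws_iam_policy_document" "scoping_example" {
  statement {
    sid       = "Overprovisioned"
    actions   = ["s3:GetObject"]
    resources = ["*"] # matches objects in every bucket the principal can reach
  }

  statement {
    sid       = "Scoped"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::docs-bucket/*"] # placeholder bucket name
  }
}
```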
This module addresses these problems by helping you declare your intent and letting the module worry about the details. Specify context about your use case and intended access, and the module will:
- create a bucket named using your context
- generate a least privilege bucket policy
- configure encryption
- tag resources according to the k9 Security tagging model
- configure access logging
- and more
The root of this repository contains a Terraform module that manages an AWS S3 bucket (S3 bucket API).
The k9 S3 bucket module allows you to define who should have access to the bucket in terms of k9's access capability model. Instead of writing a least privilege access policy directly in terms of API actions like `s3:GetObject`, you declare who should be able to `read-data`. This module supports the following access capabilities:

- `administer-resource`
- `read-config`
- `read-data`
- `write-data`
- `delete-data`
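As a rough illustration only, each capability expands to a group of related S3 API actions; the authoritative mapping is maintained by the k9 access capability model documentation and the k9policy module, not by this snippet:

```hcl
# Hypothetical sketch of how capabilities group API actions.
locals {
  example_capability_actions = {
    "read-data"   = ["s3:GetObject"]
    "write-data"  = ["s3:PutObject"]
    "delete-data" = ["s3:DeleteObject"]
  }
}
```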
First, define who should have access to the bucket as lists of AWS principal IDs.
The most common principals you will use are AWS IAM user and role ARNs such as `arn:aws:iam::12345678910:role/appA`. Consider using `locals` to help document intent, keep lists synchronized, and reduce duplication.
```hcl
# Define which principals may access the bucket and what capabilities they should have
# k9 access capabilities are defined at https://k9security.io/docs/k9-access-capability-model/
locals {
  administrator_arns = [
    "arn:aws:iam::12345678910:user/ci",
    "arn:aws:iam::12345678910:user/person1"
  ]

  read_config_arns = concat(local.administrator_arns,
                            ["arn:aws:iam::12345678910:role/k9-auditor"])

  read_data_arns = [
    "arn:aws:iam::12345678910:user/person1",
    "arn:aws:iam::12345678910:role/appA",
  ]

  write_data_arns = local.read_data_arns
}
```
Now instantiate the module with a definition like this:
```hcl
module "s3_bucket" {
  source = "[email protected]:k9securityio/terraform-aws-s3-bucket.git"

  # the logical name for the use case, e.g. docs, reports, media, backups
  logical_name = "docs"

  logging_target_bucket = "name of the bucket to log to, e.g. my-logs-bucket"

  org   = "someorg"
  owner = "someowner"
  env   = "dev"
  app   = "someapi"

  allow_administer_resource_arns = local.administrator_arns
  allow_read_config_arns         = local.read_config_arns
  allow_read_data_arns           = local.read_data_arns
  allow_write_data_arns          = local.write_data_arns
}
```
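Once instantiated, you can reference the module's outputs elsewhere in your configuration. As a minimal sketch, you might re-export the bucket ARN (`bucket_arn` is the output used later in this README; check the module's outputs for the full list):

```hcl
# Re-export the bucket's ARN from your root module.
output "docs_bucket_arn" {
  value = module.s3_bucket.bucket_arn
}
```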
This code enables the following access:

- allow `ci` and `person1` users to administer the bucket
- allow `k9-auditor` to read the bucket's configuration
- allow the `person1` user and `appA` role to read and write data from the bucket
- deny all other access; this is the tricky bit!
You can see the policy this configuration generates in `examples/generated.least_privilege_policy.json`.
This module supports the full tagging model described in the k9 Security tagging guide. The tagging model covers these resource attributes:
- Identity and Ownership
- Security
- Risk
Most of the tagging model is exposed as optional attributes so that you can adopt it incrementally. See the S3 bucket API for the full set of options.
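One way to adopt the model incrementally is to start with the identity attributes shown above and layer in security and risk attributes later. The commented attribute names below are hypothetical placeholders, not the module's actual option names; verify them against the S3 bucket API first:

```hcl
module "s3_bucket" {
  source = "[email protected]:k9securityio/terraform-aws-s3-bucket.git"

  logical_name          = "docs"
  logging_target_bucket = "my-logs-bucket"
  org                   = "someorg"
  owner                 = "someowner"
  env                   = "dev"
  app                   = "someapi"

  # Hypothetical optional security/risk attributes; check the module API
  # for the real names before uncommenting:
  # confidentiality = "internal"
  # data_risk       = "high"
}
```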
We hope that module instantiation is easy to understand and conveys intent. If you think this can be improved, we would love your feedback as a pull request with a question, clarification, or alternative.
Alternatively, you can create your own S3 bucket policy and provide it to the module using the `policy` attribute.
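A minimal sketch of that approach, assuming you build the policy with the AWS provider's `aws_iam_policy_document` data source (the statement contents and bucket name are placeholders, not a recommended policy):

```hcl
data "aws_iam_policy_document" "custom_bucket_policy" {
  statement {
    sid     = "AllowAppARead"
    actions = ["s3:GetObject"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::12345678910:role/appA"]
    }

    resources = ["arn:aws:s3:::someorg-dev-docs/*"] # placeholder bucket ARN
  }
}

module "s3_bucket" {
  source = "[email protected]:k9securityio/terraform-aws-s3-bucket.git"

  logical_name          = "docs"
  logging_target_bucket = "my-logs-bucket"
  org                   = "someorg"
  owner                 = "someowner"
  env                   = "dev"
  app                   = "someapi"

  # Provide your own policy instead of generating one
  policy = data.aws_iam_policy_document.custom_bucket_policy.json
}
```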
You can also generate a least privilege bucket policy using the `k9policy` submodule directly (k9policy API). This enables you to use a k9 bucket policy with another Terraform module. Instantiate the `k9policy` module directly like this:
```hcl
module "least_privilege_bucket_policy" {
  source = "[email protected]:k9securityio/terraform-aws-s3-bucket.git//k9policy"

  s3_bucket_arn = module.s3_bucket.bucket_arn

  allow_administer_resource_arns = local.administrator_arns
  allow_read_config_arns         = local.read_config_arns
  allow_read_data_arns           = local.read_data_arns
  allow_write_data_arns          = local.write_data_arns
}
```
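You can then attach the generated policy to a bucket managed outside this module (pass that bucket's ARN as `s3_bucket_arn` when instantiating the submodule). The sketch below assumes the submodule exposes the rendered policy document as an output named `policy_json`; verify the actual output name in the k9policy API:

```hcl
resource "aws_s3_bucket" "other" {
  bucket = "my-other-bucket" # placeholder name
}

resource "aws_s3_bucket_policy" "least_privilege" {
  bucket = aws_s3_bucket.other.id

  # 'policy_json' is an assumed output name; check the k9policy API
  policy = module.least_privilege_bucket_policy.policy_json
}
```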
See the 'minimal' test fixture at `test/fixtures/minimal/minimal.tf` for complete examples of how to use these S3 bucket and policy modules.
There are at least two ways to migrate to this module:

- if you are already using Terraform and want to try out a better bucket policy, you can use the policy submodule directly. This is described above and demonstrated in the tests.
- if you want to migrate an existing bucket into this Terraform module, you can use `terraform import` or `terraform state mv` to migrate the AWS bucket resource into a new Terraform module definition; see the sketch below.
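For example, on Terraform 1.5 or newer you could use an `import` block. This is a sketch: the resource address inside the module (`aws_s3_bucket.bucket`) is an assumption you should verify against the module source, and the bucket name is a placeholder.

```hcl
# Import an existing bucket into the module-managed resource.
import {
  to = module.s3_bucket.aws_s3_bucket.bucket # verify the internal address
  id = "my-existing-bucket-name"             # placeholder bucket name
}
```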
If you have questions or would like help, feel free to file a PR or contact us privately.
Testing modules locally can be accomplished using a series of `make` tasks contained in this repo.
| Make Task | What happens |
|---|---|
| `all` | Execute the canonical build for the generic infrastructure module (does not destroy infra) |
| `converge` | Execute `kitchen converge` for all modules |
| `circleci-build` | Run a local CircleCI build |
| `lint` | Execute `tflint` for the generic infrastructure module |
| `test` | Execute `kitchen test --destroy=always` for all modules |
| `verify` | Execute `kitchen verify` for all modules |
| `destroy` | Execute `kitchen destroy` for all modules |
| `kitchen` | Execute `kitchen <command>`. Specify the command with the `COMMAND` argument to `make` |
e.g. run a single test: `make kitchen COMMAND="verify minimal-aws"`
Typical workflow:

- Start off with a clean slate of running test infrastructure: `make destroy; make all`
- Make changes and (repeatedly) run: `make converge && make verify`
- Rebuild everything from scratch: `make destroy; make all`
- Commit and issue a pull request
Test Kitchen uses the concept of "instances" as its medium for multiple test packages in a project. An "instance" is the combination of a test suite and a platform. This project uses a single platform for all specs (e.g. `aws`). The name of this platform actually doesn't matter since the Terraform provisioner and driver are not affected by it.

You can see the available test instances by running the `kitchen list` command:
```
$ make kitchen COMMAND=list
Instance     Driver     Provisioner  Verifier   Transport  Last Action  Last Error
default-aws  Terraform  Terraform    Terraform  Ssh        Verified
```
To run Test Kitchen processes for a single instance, you must use the `kitchen` target from the make file and pass the command and the instance name using the `COMMAND` variable to `make`:
```
# where 'default-aws' is an Instance name from kitchen's list
$ make kitchen COMMAND="converge default-aws"
```