A simple tool to deploy static websites to Amazon S3 and CloudFront, with support for Gzip and custom headers (e.g. `Cache-Control`). It uses ETag hashes to check whether a file has changed, which makes it well suited to static site generators like Hugo.
Pre-built binaries can be found on the project's GitHub Releases page.
s3deploy is a Go application, so you can also fetch and build it yourself via `go get`:

```sh
go get -u -v github.com/sequra/s3deploy
```
To install on macOS using Homebrew:

```sh
brew install sequra/tap/s3deploy
```
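Once installed, a quick sanity check (the `-V` flag prints the version and exits):

```sh
s3deploy -V
```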
```
Usage of s3deploy:
  -V    print version and exit
  -acl string
        provide an ACL for uploaded objects. to make objects public, set to 'public-read'. all possible values are listed here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl (default "private")
  -bucket string
        destination bucket name on AWS
  -config string
        optional config file (default ".s3deploy.yml")
  -distribution-id string
        optional CDN distribution ID for cache invalidation
  -force
        upload even if the etags match
  -h    help
  -key string
        access key ID for AWS
  -max-delete int
        maximum number of files to delete per deploy (default 256)
  -path string
        optional bucket sub path
  -quiet
        enable silent mode
  -region string
        name of AWS region
  -secret string
        secret access key for AWS
  -source string
        path of files to upload (default ".")
  -try
        trial run, no remote updates
  -v    enable verbose logging
  -workers int
        number of workers to upload files (default -1)
```
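For example, a basic deploy might look like this (the bucket name and region are placeholders; see the notes below for credentials):

```sh
# Credentials can be passed via the -key/-secret flags or via environment variables.
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Upload the contents of ./public to the bucket.
s3deploy -source=public -bucket=example.com -region=eu-west-1
```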
- The `key` and `secret` command flags can also be set with the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
- The `region` flag is the AWS API name for the region where your bucket resides. See the table below or the AWS Regions documentation for an up-to-date list.
| Bucket region | API value | Bucket region | API value |
|---|---|---|---|
| Canada (Central) | ca-central-1 | Asia Pacific (Mumbai) | ap-south-1 |
| US East (Ohio) | us-east-2 | Asia Pacific (Seoul) | ap-northeast-2 |
| US East (N. Virginia) | us-east-1 | Asia Pacific (Singapore) | ap-southeast-1 |
| US West (N. California) | us-west-1 | Asia Pacific (Sydney) | ap-southeast-2 |
| US West (Oregon) | us-west-2 | Asia Pacific (Tokyo) | ap-northeast-1 |
| EU (Frankfurt) | eu-central-1 | China (Beijing) | cn-north-1 |
| EU (Ireland) | eu-west-1 | China (Ningxia) | cn-northwest-1 |
| EU (London) | eu-west-2 | | |
| EU (Paris) | eu-west-3 | | |
| South America (São Paulo) | sa-east-1 | | |
See https://docs.aws.amazon.com/sdk-for-go/api/aws/session/#hdr-Sessions_from_Shared_Config: the AWS SDK will fall back to credentials from `~/.aws/credentials`. If you set the `AWS_SDK_LOAD_CONFIG` environment variable, it will also load shared config from `~/.aws/config`, where you can, for example, set a global `region` to use when none is provided.
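A minimal sketch of that setup, assuming a default profile already exists in `~/.aws/credentials` and a region is set in `~/.aws/config`:

```sh
# Tell the AWS SDK to also load shared config from ~/.aws/config.
export AWS_SDK_LOAD_CONFIG=1

# With shared credentials and config in place, -key/-secret/-region can be omitted.
s3deploy -source=public -bucket=example.com
```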
Add a `.s3deploy.yml` configuration file in the root of your site. Example configuration:
```yaml
routes:
    - route: "^.+\\.(js|css|svg|ttf)$"
      # cache static assets for 20 years
      headers:
          Cache-Control: "max-age=630720000, no-transform, public"
      gzip: true
    - route: "^.+\\.(png|jpg)$"
      headers:
          Cache-Control: "max-age=630720000, no-transform, public"
      gzip: false
    - route: "^.+\\.(html|xml|json)$"
      gzip: true
```
Deploy order sometimes matters. For instance, when deploying a SPA (Single Page Application), `index.html` and any other unversioned files must be deployed last to avoid problems with missing resources. To specify a deploy order, add an `order` section to your `.s3deploy.yml` as follows:
```yaml
routes:
    # your routes here
    - ...
order:
    - "^notmatchingfile$"
    - "^index\\.html$"
```
Order groups work as follows:

- Rules are written as regular expressions that match files.
- There is always an implicit order group in the first position, containing all files not matched by any other order group.
- Groups are deployed in the order they appear in the array. In the example above, given the files `test.css`, `test.txt`, `test.html`, `index.html`, and `test.js`, all files except `index.html` are deployed to S3 first; once they have all been uploaded successfully, `index.html` is deployed.
- Groups may be empty, i.e. their regular expression matches no files; such groups are ignored during deploys. In the example above, this applies to the first rule (`"^notmatchingfile$"`).
Paths starting with a dot are considered hidden, and `s3deploy` ignores them. This is typically what you want, e.g. for the `.git` path. Sometimes, however, hidden paths are needed, for instance when the `.well-known` folder should be uploaded to S3. To allow such hidden folders, add them to the `.s3deploy.yml` file:
```yaml
routes:
    # your routes here
    - ...
dotallowpaths:
    - .well-known
```
An example IAM policy granting the access rights `s3deploy` needs:

```json
{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::<bucketname>"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::<bucketname>/*"
      }
   ]
}
```

Replace `<bucketname>` with your own.
If you have configured CloudFront CDN in front of your S3 bucket, you can supply the `distribution-id` as a flag. This will make sure the cache is invalidated for the updated files after the deployment to S3. Note that the AWS user must have the needed access rights.

Note that CloudFront allows 1,000 paths per month at no charge, so `s3deploy` tries to be smart about the invalidation strategy; we try to reduce the number of paths to 8. If that isn't possible, we fall back to a full invalidation, i.e. `/*`.
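For example (the distribution ID below is a placeholder; use your own):

```sh
# Deploy and invalidate the CloudFront cache for any changed files.
s3deploy -source=public -bucket=example.com -region=eu-west-1 \
  -distribution-id=E1XXXXXXXXXXXX
```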
An example IAM policy that also covers CloudFront invalidation (again, replace `<bucketname>` with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::<bucketname>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<bucketname>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:GetDistribution",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": "*"
    }
  ]
}
```
If you're looking at `s3deploy`, you've probably already seen the `aws s3 sync` command. That command's sync strategy is not optimised for static sites: it compares the timestamp and size of your files to decide whether to upload them. Because static site generators can recreate every file (even if identical), the timestamps are updated, so `aws s3 sync` will needlessly upload every single file. `s3deploy`, on the other hand, checks the ETag hash to detect actual changes, and uses that instead.
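To preview what `s3deploy` considers changed without making any remote updates, use the `-try` flag (bucket and region below are placeholders):

```sh
# Trial run: reports planned uploads and deletions without performing them.
s3deploy -try -source=public -bucket=example.com -region=eu-west-1
```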