This repository has been archived by the owner on Aug 9, 2023. It is now read-only.

Merge pull request #138 from itzhapaz/aws-genomics-cdk
Aws genomics cdk
wleepang authored Mar 31, 2021
2 parents f7283d7 + d7fe024 commit 1826068
Showing 113 changed files with 13,675 additions and 3,160 deletions.
6 changes: 4 additions & 2 deletions .gitignore
@@ -1,4 +1,3 @@
.DS_Store
.idea
/.idea/markdown-navigator.xml
/.idea/markdown-navigator/profiles_settings.xml
@@ -58,4 +57,7 @@ __pycache__
publish
launch.sh
LICENSE-*
src/templates/tests
src/templates/tests
/aws-genomics-workflows.iml
_ignore
dist/
1 change: 1 addition & 0 deletions .travis.yml
@@ -33,6 +33,7 @@ deploy:
on:
repo: aws-samples/aws-genomics-workflows
branch: release
tags: true
- provider: script
script: bash _scripts/deploy.sh test
skip_cleanup: true
53 changes: 52 additions & 1 deletion README.md
@@ -4,6 +4,57 @@

This repository is the source code for [Genomics Workflows on AWS](https://docs.opendata.aws/genomics-workflows). It contains markdown documents that are used to build the site as well as source code (CloudFormation templates, scripts, etc) that can be used to deploy AWS infrastructure for running genomics workflows.

If you want to get the latest version of these solutions up and running quickly, it is recommended that you deploy stacks using the launch buttons available via the [hosted guide](https://docs.opendata.aws/genomics-workflows).

If you want to customize these solutions, you can create your own distribution using the instructions below.

## Creating your own distribution

Clone the repo

```bash
git clone https://github.com/aws-samples/aws-genomics-workflows.git
```

Create an S3 bucket in your AWS account to use for the distribution deployment

```bash
aws s3 mb s3://<dist-bucketname>
```

Create and deploy a distribution from source

```bash
cd aws-genomics-workflows
bash _scripts/deploy.sh --deploy-region <region> --asset-profile <profile-name> --asset-bucket s3://<dist-bucketname> test
```

This will create a `dist` folder in the root of the project with subfolders `dist/artifacts` and `dist/templates` that will be uploaded to the S3 bucket you created above.

Use the `--asset-profile` option to specify the AWS CLI profile to use for the deployment.

**Note**: the region given to `--deploy-region` should match the region in which the bucket `<dist-bucketname>` was created.

You can now use your deployed distribution to launch stacks using the AWS CLI. For example, to launch the GWFCore stack:

```bash
TEMPLATE_ROOT_URL=https://<dist-bucketname>.s3-<region>.amazonaws.com/test/templates

aws cloudformation create-stack \
--region <region> \
--stack-name <stackname> \
--template-url $TEMPLATE_ROOT_URL/gwfcore/gwfcore-root.template.yaml \
--capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND \
--parameters \
ParameterKey=VpcId,ParameterValue=<vpc-id> \
ParameterKey=SubnetIds,ParameterValue=\"<subnet-id-1>,<subnet-id-2>,...\" \
ParameterKey=ArtifactBucketName,ParameterValue=<dist-bucketname> \
ParameterKey=TemplateRootUrl,ParameterValue=$TEMPLATE_ROOT_URL \
ParameterKey=S3BucketName,ParameterValue=<store-bucketname> \
ParameterKey=ExistingBucket,ParameterValue=false

```

## Building the documentation

The documentation is built using MkDocs.
@@ -19,7 +70,7 @@ This will create a `conda` environment called `mkdocs`
Build the docs:

```bash
$ source activate mkdocs
$ conda activate mkdocs
$ mkdocs build
```

2 changes: 2 additions & 0 deletions _scripts/configure-deploy.sh
@@ -8,9 +8,11 @@ set -e
mkdir -p $HOME/.aws
cat << EOF > $HOME/.aws/config
[default]
region = us-east-1
output = json
[profile asset-publisher]
region = us-east-1
role_arn = ${ASSET_ROLE_ARN}
credential_source = Environment
EOF
144 changes: 125 additions & 19 deletions _scripts/deploy.sh
@@ -2,12 +2,52 @@

set -e

bash _scripts/make-artifacts.sh
bash _scripts/make-dist.sh
mkdocs build

SITE_BUCKET=s3://docs.opendata.aws/genomics-workflows
ASSET_BUCKET=s3://aws-genomics-workflows
ASSET_STAGE=${1:-production}
ASSET_STAGE=test
ASSET_PROFILE=asset-publisher
DEPLOY_REGION=us-east-1

PARAMS=""
while (( "$#" )); do
case "$1" in
--site-bucket)
SITE_BUCKET=$2
shift 2
;;
--asset-bucket)
ASSET_BUCKET=$2
shift 2
;;
--asset-profile)
ASSET_PROFILE=$2
shift 2
;;
--deploy-region)
DEPLOY_REGION=$2
shift 2
;;
--) # end optional argument parsing
shift
break
;;
-*|--*=)
echo "Error: unsupported argument $1" >&2
exit 1
;;
*) # positional arguments
PARAMS="$PARAMS $1"
shift
;;
esac
done

eval set -- "$PARAMS"

ASSET_STAGE=${1:-$ASSET_STAGE}

function s3_uri() {
BUCKET=$1
@@ -21,51 +61,117 @@ function s3_uri() {
echo "${BUCKET%/}/${PREFIX:1}"
}
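The collapsed hunk above hides most of `s3_uri`'s body. A minimal, self-contained sketch of its apparent behavior, reconstructed only from the visible lines (the loop and variable handling are assumptions; the final `echo` is taken verbatim):

```shell
# Sketch of s3_uri: join a bucket URI and any number of path parts into
# a single S3 URI, normalizing stray slashes. Reconstructed from the
# visible lines of the function; internals are assumed.
s3_uri() {
  local BUCKET=$1
  shift

  # Accumulate parts as "/a/b"; ${PREFIX:1} later drops the leading slash
  local PREFIX=""
  local part
  for part in "$@"; do
    if [ -n "$part" ]; then
      PREFIX="$PREFIX/${part%/}"
    fi
  done

  echo "${BUCKET%/}/${PREFIX:1}"
}

s3_uri s3://my-bucket/ test templates  # → s3://my-bucket/test/templates
```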

function s3_sync() {
local source=$1
local destination=$2

function artifacts() {
S3_URI=$(s3_uri $ASSET_BUCKET $ASSET_STAGE_PATH "artifacts")

echo "publishing artifacts: $S3_URI"
echo "syncing ..."
echo " from: $source"
echo " to: $destination"
aws s3 sync \
--profile asset-publisher \
--profile $ASSET_PROFILE \
--region $DEPLOY_REGION \
--acl public-read \
--delete \
./artifacts \
$S3_URI
--metadata commit=$(git rev-parse HEAD) \
$source \
$destination
}

function publish() {
local source=$1
local destination=$2

if [[ $USE_RELEASE_TAG && ! -z "$TRAVIS_TAG" ]]; then
# create explicit pinned versions "latest" and TRAVIS_TAG
# pin the TRAVIS_TAG first, since the files are modified in place
# "latest" will inherit the TRAVIS_TAG value
echo "PINNED VERSION: $TRAVIS_TAG"
for version in $TRAVIS_TAG latest; do
S3_URI=$(s3_uri $ASSET_BUCKET $ASSET_STAGE_PATH $version $destination)

if [[ "$destination" == "templates" ]]; then
# pin distribution template and artifact paths in cfn templates
pin_version $version templates $source
pin_version $version artifacts $source
fi

s3_sync $source $S3_URI
done
elif [[ $ASSET_STAGE == "test" ]]; then
echo "PINNED VERSION: $ASSET_STAGE"
version=$ASSET_STAGE
S3_URI=$(s3_uri $ASSET_BUCKET $ASSET_STAGE_PATH $destination)

if [[ "$destination" == "templates" ]]; then
# pin distribution template and artifact paths in cfn templates
pin_version $version templates $source
pin_version $version artifacts $source
fi

s3_sync $source $S3_URI
else
echo "unknown publish target"
exit 1
fi

}


function pin_version() {
# locates parameters in CloudFormation template files in {folder} that need to be version pinned
# using the locator pattern: "{asset}\s{2}# dist: {action}"
# replaces the locator pattern with: "{version}/{asset} #"
local version=$1
local asset=$2
local folder=$3

echo "PINNING VERSIONS"
for file in `grep -irl "$asset # dist: pin_version" $folder`; do
echo "pinning '$asset' as '$version/$asset' in '$file'"
sed -i'' -e "s|$asset # dist: pin_version|$version/$asset #|g" $file
done
}
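The locator pattern is easiest to see on a concrete file. Here is a self-contained demo of the substitution `pin_version` performs; the template fragment and version string are made up for illustration, and only the `sed` expression comes from the script:

```shell
# Demo of the pin_version substitution. The sed expression mirrors the
# one in the function above; file contents and version are hypothetical.
version="v3.0.5"
asset="artifacts"

tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
  ArtifactRootUrl:
    Type: String
    Default: artifacts # dist: pin_version
EOF

# The script uses sed -i'' to edit in place; printing to stdout here
# makes the effect visible. The Default line becomes:
#   Default: v3.0.5/artifacts #
sed -e "s|$asset # dist: pin_version|$version/$asset #|g" "$tmpfile"

rm -f "$tmpfile"
```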


function artifacts() {

publish ./dist/artifacts artifacts

}


function templates() {
S3_URI=$(s3_uri $ASSET_BUCKET $ASSET_STAGE_PATH "templates")

echo "publishing templates: $S3_URI"
aws s3 sync \
--profile asset-publisher \
--acl public-read \
--delete \
--metadata commit=$(git rev-parse HEAD) \
./src/templates \
$S3_URI
publish ./dist/templates templates

}


function site() {
echo "publishing site"
aws s3 sync \
--region $DEPLOY_REGION \
--acl public-read \
--delete \
--metadata commit=$(git rev-parse HEAD) \
./site \
s3://docs.opendata.aws/genomics-workflows
$SITE_BUCKET
}


function all() {
artifacts
templates
site
}


echo "DEPLOYMENT STAGE: $ASSET_STAGE"
case $ASSET_STAGE in
production)
ASSET_STAGE_PATH=""
USE_RELEASE_TAG=1
all
;;
test)
34 changes: 0 additions & 34 deletions _scripts/make-artifacts.sh

This file was deleted.

