Quorum-AWS has been deprecated, and we are no longer supporting the project.
It has been replaced by quorum-terraform, which offers wider compatibility with Quorum products and cloud providers.
We encourage all users with active projects to migrate to quorum-terraform.
If you have any questions or concerns, please reach out to the ConsenSys protocol engineering team on Discord or by email.
This repo contains the tools we use to deploy test Quorum clusters to AWS.
- We use Docker to build images for quorum, constellation, and this codebase (quorum-aws, which extends quorum-tools).
- Docker images are pushed to AWS' ECS repositories.
- We use Terraform to provision single-region (cross-availability-zone) and multi-region (cross-internet) Quorum clusters using these images.
With a little bit of time and an AWS account, you should be able to use this project to easily deploy a Quorum cluster to AWS.
- Installed software: Docker, Terraform, stack, jq, and awscli
- awscli needs to be configured to talk to AWS (see the user guide or use `aws configure help`)
- `terraform/secrets/terraform.tfvars` edited (e.g. `vim terraform/secrets/terraform.tfvars`) to reflect the AWS credentials in `~/.aws/credentials` (one way to do this is sketched below)
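For reference, here is one possible way to populate that secrets file from the default AWS CLI profile. The variable names `access_key` and `secret_key` are assumptions made for this sketch; check this repo's Terraform variable definitions for the names it actually expects:

```bash
# Hypothetical helper: copy the default AWS CLI profile's credentials into the
# Terraform secrets file. The tfvars variable names below are assumptions --
# confirm them against this repo's Terraform configuration before relying on this.
cat > terraform/secrets/terraform.tfvars <<EOF
access_key = "$(aws configure get aws_access_key_id)"
secret_key = "$(aws configure get aws_secret_access_key)"
EOF
```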
From the root of this project, you can execute the following two scripts to build Docker images for quorum, constellation, and quorum-aws. The latter will be built both locally and in Docker (to be deployed to AWS). We need to build and push these Docker images before we can run a cluster on AWS.
If we haven't already, we need to pull Quorum and Constellation down into the `dependencies` directory:

```
git submodule init && git submodule update
```
Then build the Docker images and push them to ECS repositories:

```
./build && ./push
```
Error 137 is generally a sign that you should configure Docker with more memory.
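For reference, pushing an image to an AWS container registry generally involves something like the following. This is only a sketch: the region, account ID, and repository name below are placeholders, and the repo's actual `./push` script may do this differently:

```bash
# Authenticate Docker against the registry (awscli v1 syntax), then tag and
# push an image. All identifiers below are placeholders.
$(aws ecr get-login --no-include-email --region us-east-1)
docker tag quorum:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/quorum:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/quorum:latest
```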
To manage terraformed infrastructure across different regions and clusters, instead of using the `terraform` binary directly, we use (symlinks to) a wrapper script around the `terraform` binary that automatically sets variables and state output locations per environment. Take a look inside `terraform/bin` to see how this works:
```
> ls -al terraform/bin
total 64
drwxr-xr-x  11 bts  staff  374 Oct 11 15:13 .
drwxr-xr-x  13 bts  staff  442 Oct 11 15:35 ..
drwxr-xr-x   3 bts  staff  102 Oct 11 15:58 .bin
-rwxr-xr-x   1 bts  staff  793 Oct 11 14:39 .multi-start-cluster
-rwxr-xr-x   1 bts  staff  812 Oct 11 14:39 .multi-start-tunnels
lrwxr-xr-x   1 bts  staff   16 Oct  2 11:42 demo -> .bin/env-wrapper
lrwxr-xr-x   1 bts  staff   16 Oct  2 11:42 global -> .bin/env-wrapper
lrwxr-xr-x   1 bts  staff   16 Oct  2 11:42 intl-ireland -> .bin/env-wrapper
lrwxr-xr-x   1 bts  staff   16 Oct  2 11:42 intl-tokyo -> .bin/env-wrapper
lrwxr-xr-x   1 bts  staff   16 Oct  2 11:42 intl-virginia -> .bin/env-wrapper
-rwxr-xr-x   1 bts  staff  235 Oct 11 15:13 multi-start
```
Here, `demo` is a symlink to the wrapper script that will invoke Terraform in such a way that it knows we are concerned with the "demo" environment. Instead of using the `terraform` binary directly (e.g. `terraform plan`), we issue the same Terraform CLI commands to the wrapper script (e.g. `bin/demo plan`).
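To make this concrete, a wrapper along these lines might look roughly like the sketch below. This is illustrative only, under the assumption of per-environment state files; it is not the actual contents of `.bin/env-wrapper`:

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- the repo's real .bin/env-wrapper may differ.
# The environment name is taken from the name of the symlink that invoked us
# (demo, global, intl-tokyo, ...), then forwarded to terraform so that
# variables and state stay separate per environment.
ENV=$(basename "$0")
export TF_VAR_env="$ENV"                             # visible in Terraform as var.env
exec terraform "$@" -state="state/${ENV}.tfstate"    # per-environment state file
```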
The pre-supplied binary wrappers have the following purposes:

- The `global` environment contains IAM infrastructure that is not particular to any one AWS region, and will be `apply`ed only once.
- `demo` is the default name of a single-region cluster that will be deployed to `us-east-1`.
- `intl-ireland`, `intl-tokyo`, and `intl-virginia` contain the infrastructure for the three regions of an international cluster, respectively. This infrastructure lives in separate files because Terraform is hard-coded to support at most one region per `main.tf` file.
If you want, you can simply make a new symlink (in `terraform/bin`) to `terraform/bin/.bin/env-wrapper` named whatever you like (e.g. `mycluster`), and then you can use that script to launch a new cluster with that name.
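For example, using the hypothetical name `mycluster`:

```bash
cd terraform/bin
ln -s .bin/env-wrapper mycluster   # new environment named "mycluster"
cd ..
bin/mycluster plan                 # then drive it like the other wrappers
```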
Because we're using the `aws` and `null` Terraform plugins, we need to initialize them:

```
terraform init
```
The following only needs to be done once to deploy some Identity and Access Management (IAM) infrastructure that we re-use across clusters:
```
cd terraform
bin/global apply
```
If at some point in the future you want to destroy this infrastructure, you can run `bin/global destroy`.
```
cd terraform
```

For a given Terraform environment, we can use the normal Terraform commands like `plan`, `show`, `apply`, `destroy`, and `output` to work with a single-region cluster:
- `bin/demo plan` shows us what infrastructure will be provisioned if we decide to `apply`
- `bin/demo apply` creates the infrastructure. In a single-region setting, this also automatically starts the Quorum cluster.
- `bin/demo show` reports the current Terraform state for the environment
- `bin/demo output` can print the value for an output variable listed in `output.tf`, e.g.: `bin/demo output geth1`. This can be handy to easily SSH into a node in the cluster: e.g. try `ssh ubuntu@$(bin/demo output geth1)` or `ssh ubuntu@$(bin/demo output geth2)`.
Once SSH'd in to a node, we can use a few utility scripts that have been installed in the `ubuntu` user's homedir to interact with `geth`:

- `./spam 10` will send in 10 transactions per second until `^C` stops it
- `./follow` shows the end of the (`tail -f`/followed) `geth` log
- `./attach` attaches to the local `geth` process

```
exit
```
At this point, if we like, we can destroy the cluster:
```
bin/demo destroy
```
At the moment, this is slightly more involved than deployment for a single-region cluster. Symlinks (in `terraform/bin`) are currently set up for one multi-region cluster called "intl" that spans three regions. Because `ireland` is set up in this cluster to be "geth 1", it performs the side effect of generating a `cluster-data` directory that will be used for the other two regions. So, we provision `ireland` first:
```
bin/intl-ireland apply
```
Then we can provision `tokyo` and `virginia`. You can do these two steps in parallel (e.g. in different terminals) if you'd like:
```
bin/intl-tokyo apply
bin/intl-virginia apply
```
Once all three regions have been provisioned, we need to start the cluster. In single-region clusters this is done automatically, but in multi-region clusters, it's manual. This will set up SSH tunnels between regions for secure communication between them, then start constellation and quorum on each node. Note here we specify the name of the cross-region cluster, `intl`.

```
bin/multi-start intl
```
At this point, we should be able to log in to one of the nodes and see the cluster in action:
```
ssh ubuntu@$(bin/intl-virginia output eip)
```

where `eip` stands for Elastic IP, the static IP address other nodes in the cluster can use to connect to this one.

- `./spam 10` sends in 10 transactions per second for a few seconds, then `^C` to stop it
- `./follow` shows the end of the (`tail -f`/followed) `geth` log, or
- `./attach` attaches to the local node.
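Once attached, a couple of standard `geth` console calls give a quick sanity check that the node sees its peers and that the chain advances while `./spam` runs (these are stock `geth` console APIs, not scripts specific to this repo):

```bash
./attach
# Then, inside the geth JavaScript console:
#   > net.peerCount     // number of peers this node is connected to
#   > eth.blockNumber   // current block height; re-run while ./spam is active
#   > exit
```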