After running `pentagon start-project` you will have a directory with a layout similar to:
.
├── README.md
├── ansible-requirements.yml
├── inventory/
├── docs/
├── plugins/
└── requirements.txt
See also Extended Layout
Generally speaking, the layout of the infrastructure repository is hierarchical. That is to say, higher-level directories contain scripts, resources, and variables that are intended to be used earlier in the creation of your infrastructure.
The `inventory` directory is used to store an arbitrary segment of your infrastructure. It can be a separate AWS account, AWS VPC, GCP Project, or GCP Network. It can be as fine-grained as you like, but the `config` directory in each "inventory item" is scoped to, at most, one AWS Account+VPC or one GCP Project+Network. By default, the `inventory` directory includes one `default` directory with configuration for one VPC and two Kops clusters. You can pass `pentagon start-project` the `--no-configure` flag to build your own.
The `config` directory is separated into `local` and `private`. Files, scripts, and templates in `config/local` are checked into source control and should not contain any workstation-specific values. `config/local/env-vars.sh` uses a specific list of variable names, locates their values in `config/local/vars.yml` and `config/private/secrets.yml`, and exports them as environment variables. These environment variables are used throughout the infrastructure repository, so make sure you source `config/local/env-vars.sh`.
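As an illustration, here is a minimal sketch of that lookup-and-export pattern. It is not the shipped script: the variable names are hypothetical, and it assumes flat `KEY: value` entries in the two YAML files.

```bash
#!/usr/bin/env bash
# Sketch of the env-vars.sh pattern: for each name in a fixed list, find
# its value in config/local/vars.yml (or config/private/secrets.yml) and
# export it. AWS_DEFAULT_REGION and KOPS_STATE_STORE are example names.
for var in AWS_DEFAULT_REGION KOPS_STATE_STORE; do
  value=$(grep -h "^${var}:" config/local/vars.yml config/private/secrets.yml 2>/dev/null \
            | head -n 1 | cut -d':' -f2- | xargs)
  [ -n "${value}" ] && export "${var}=${value}"
done
```

Remember to `source` the real script rather than executing it, so the exports persist in your current shell.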
Some configurations require absolute paths which, if checked into source control, can make working with teams challenging. The `config/local/local-config-init` script makes this easier by providing a fast way to generate workstation-specific configurations from the `ansible.cfg-default` and `ssh_config-default` template files. The generated workstation-specific configuration files are written to `config/private`.
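The effect is roughly the following sketch, assuming the templates carry a placeholder token for the repository's absolute path (the `REPO_PATH` token is hypothetical):

```bash
#!/usr/bin/env bash
# Sketch of what local-config-init accomplishes: substitute this
# workstation's absolute repository path into the *-default templates,
# writing the results to config/private/. Run from the repository root.
REPO_ROOT="$(pwd)"
sed "s|REPO_PATH|${REPO_ROOT}|g" config/local/ansible.cfg-default > config/private/ansible.cfg
sed "s|REPO_PATH|${REPO_ROOT}|g" config/local/ssh_config-default  > config/private/ssh_config
```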
`config/private/ssh_config` and `config/private/ansible.cfg` greatly simplify interaction with your cloud VMs. They are configured to automatically use the correct key and user name based on the IP address of the host. You can either use the command `ssh -F '${INFRASTRUCTURE_REPO}/config/private/ssh_config'` or alias SSH with `alias ssh="ssh -F '${INFRASTRUCTURE_REPO}/config/private/ssh_config'"`.
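To show how the per-host selection can work, here is a hypothetical fragment of such an `ssh_config`; the address pattern, user, and key path are illustrative assumptions, not the generated contents:

```
# Hosts reached by an IP in the (assumed) private-working range use the
# working-private key; ssh matches the Host pattern against the address.
Host 10.0.16.*
    User admin
    IdentityFile /absolute/path/to/repo/config/private/working-private
```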
`config/private`, in addition to `secrets.yml`, also contains SSH keys generated by `start-project`. Unless you opted not to create the keys, the `admin-vpn` key pair will be uploaded to AWS for you when the VPN instance is created, and the `*-kube` keys will automatically be uploaded when `kops` is invoked to create the Kubernetes cluster. The other keys, `production-private` and `working-private`, are created as a convenience to be used for any instances created in the VPC `private-working` and `private-production` subnets. When `kops` is invoked to create the cluster, the Kubernetes config secret will also be created as `config/private/kube_config`.
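Once that file exists you can point `kubectl` at it explicitly; this is standard `kubectl` usage, and the path below assumes `INFRASTRUCTURE_REPO` has been exported by `env-vars.sh` and the default inventory item name:

```bash
# Use the kubeconfig generated for this inventory item.
export KUBECONFIG="${INFRASTRUCTURE_REPO}/inventory/default/config/private/kube_config"
kubectl get nodes
```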
The `default/` directory contains most of the moving parts of the infrastructure repository. The name `default` is not important; the contents are. The goal is that the contents of the `default` directory can be deep copied to create parallel (cloud provider, cloud account, VPC) infrastructure in a single repository. Consider this a guideline, not a rule!
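Under that guideline, standing up a second, parallel environment could start with a plain copy; the `staging` name below is hypothetical:

```bash
# Deep copy the default inventory item, then edit its vars.yml, cluster
# specs, and Terraform variables for the new account/VPC.
cp -a inventory/default inventory/staging
```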
├── clusters
├── resources
└── vpc
Contains `working/` and `production/` directories, both laid out identically. `working` is intended to contain any non-production Kubernetes objects (pods, deployments, services, etc.), and `production` is intended to contain the production Kubernetes objects.
├── kops.sh
├── cluster.yml
├── nodes.yml
├── masters.yml
└── secret.sh
`kops.sh` is a bash script that uploads the yml files to the S3 bucket set in `inventory/(default)/config/local/vars.yml`. `secret.sh` creates the secret containing the SSH public key material for the nodes in the cluster.
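A sketch of the flow those scripts automate is below. The subcommands are standard `kops` CLI, but the bucket, cluster name, and key path are assumptions for illustration:

```bash
# Register the cluster spec and instance groups with kops' S3 state
# store, then create the SSH public key secret for the cluster's nodes.
export KOPS_STATE_STORE="s3://example-kops-state"   # normally set via vars.yml
kops create -f cluster.yml
kops create -f nodes.yml
kops create -f masters.yml
kops create secret sshpublickey admin \
  --name working.example.com \
  -i ../../config/private/working-kube.pub
```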
The `terraform/` directory is for the AWS VPC Terraform. It is intended to hold all of the Terraform configuration for the "inventory item." Terraform modules should be used to organize the Terraform code.
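The workflow inside that directory is standard Terraform; nothing here is Pentagon-specific:

```bash
cd inventory/default/terraform
terraform init    # configures the backend declared in backend.tf
terraform plan    # previews changes to the VPC
terraform apply
```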
The `resources/` directory is where Ansible playbooks for non-cluster-specific cloud resources can be stored. The `admin-environment` playbook, which creates and configures the OpenVPN instance, is present "out of the box".
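Running it looks roughly like the following; the file names come from the extended layout below, and `vpn.yml` is presumably the entry point for creating the VPN instance (with `destroy.yml` as its counterpart):

```bash
# Create and configure the OpenVPN instance.
cd inventory/default/resources/admin-environment
ansible-playbook vpn.yml
```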
This is the Ansible plugins directory. The `ec2` inventory plugin is enabled by default; this is set in `config/private/ansible.cfg`.
The Ansible roles are installed here by default; this is also set in `config/private/ansible.cfg`. This directory is not checked into Git.
.
├── README.md
├── ansible-requirements.yml
├── config.yml
├── inventory
│   └── default                          * Directory for default cloud
│       ├── clusters                     * Directory for Clusters
│       │   ├── production               * Production Cluster Directory
│       │   │   └── vars.yml             * Variables specific to production. Used by `pentagon add kops.cluster`
│       │   └── working                  * Working Cluster Directory
│       │       └── vars.yml             * Variables specific to working. Used by `pentagon add kops.cluster`
│       ├── config                       * Configuration Directory
│       │   ├── local                    * Local, non-secret configuration
│       │   │   ├── ansible.cfg-default  * Templating code to create private configuration
│       │   │   ├── local-config-init
│       │   │   ├── ssh_config-default
│       │   │   └── vars.yml
│       │   └── private                  * Private, secret configs; ignored by git
│       │       ├── admin-vpn            * SSH key pairs generated at `start-project`
│       │       ├── admin-vpn.pub
│       │       ├── production-kube
│       │       ├── production-kube.pub
│       │       ├── production-private
│       │       ├── production-private.pub
│       │       ├── secrets.yml          * Secret values in yaml config file
│       │       ├── working-kube
│       │       ├── working-kube.pub
│       │       ├── working-private
│       │       └── working-private.pub
│       ├── kubernetes                   * You can store kubernetes manifests here
│       ├── resources                    * Ansible playbook for creating the OpenVPN instance
│       │   └── admin-environment
│       │       ├── destroy.yml
│       │       ├── env.yml
│       │       └── vpn.yml
│       └── terraform                    * Terraform for entire inventory item
│           ├── aws_vpc.auto.tfvars
│           ├── aws_vpc.tf
│           ├── aws_vpc_variables.tf
│           ├── backend.tf
│           └── provider.tf
├── plugins                              * Ansible plugins
└── requirements.txt