demo: add Azure remote demo #295

Merged 8 commits on Nov 13, 2020
1 change: 1 addition & 0 deletions .gitignore
@@ -15,6 +15,7 @@ pkg/
# Terraform state files.
*.tfstate
*.tfstate.backup
*.tfstate.*.backup
.terraform.tfstate.lock.info

# Terraform module and provider directory.
1 change: 1 addition & 0 deletions demo/remote/README.md
@@ -18,6 +18,7 @@
There are specific steps to build the infrastructure depending on which provider you wish to use.
Please navigate to the appropriate section below.
* [Amazon Web Services](./aws.md)
* [Microsoft Azure](./azure/README.md)

## The Demo
The steps below this point are generic across providers and form the main part of this demo. Enjoy.
210 changes: 210 additions & 0 deletions demo/remote/azure/README.md
@@ -0,0 +1,210 @@
### Microsoft Azure
Some of the steps below require the
[Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
to be installed on your machine.

### Environment setup
The first step is to log in using the CLI:

```shellsession
$ az login
[
  {
    "cloudName": "AzureCloud",
    "id": "<SUBSCRIPTION_ID>",
    "isDefault": true,
    "name": "Free Trial",
    "state": "Enabled",
    "tenantId": "<TENANT_ID>",
    "user": {
      "name": "[email protected]",
      "type": "user"
    }
  }
]
```

Take note of the values for `<SUBSCRIPTION_ID>` and `<TENANT_ID>` and export
them as environment variables:

```shellsession
$ export ARM_SUBSCRIPTION_ID=<SUBSCRIPTION_ID>
$ export ARM_TENANT_ID=<TENANT_ID>
```

Next, create a service principal with an application ID and password that
Terraform will use:

```shellsession
$ az ad sp create-for-rbac --role="Owner" --scopes="/subscriptions/$ARM_SUBSCRIPTION_ID"
{
  "appId": "<CLIENT_ID>",
  "displayName": "azure-cli-...",
  "name": "http://azure-cli-...",
  "password": "<CLIENT_SECRET>",
  "tenant": "<TENANT_ID>"
}
```

Export the values for `<CLIENT_ID>` and `<CLIENT_SECRET>` as environment
variables as well:

```shellsession
$ export ARM_CLIENT_ID=<CLIENT_ID>
$ export ARM_CLIENT_SECRET=<CLIENT_SECRET>
```
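If you have multiple subscriptions, it is worth making sure the CLI is pointed
at the same subscription Terraform will use; both commands below are standard
Azure CLI:

```shellsession
$ az account set --subscription $ARM_SUBSCRIPTION_ID
$ az account show --output table
```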

## Running Terraform
Navigate to the Terraform control folder and execute the Terraform
configuration to deploy the demo infrastructure:

```shellsession
$ cd ./terraform/control
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
```

Once the Terraform apply finishes, a number of useful pieces of information
are printed to your console. These include URLs to deployed resources as well
as a semi-populated Nomad Autoscaler config.

```
ip_addresses =
Server IPs:
* instance server-1 - Public: 52.188.111.20, Private: 10.0.2.4


To connect, add your private key and SSH into any client or server with
`ssh -i azure-hashistack.pem -o IdentitiesOnly=yes ubuntu@PUBLIC_IP`.
You can test the integrity of the cluster by running:

$ consul members
$ nomad server members
$ nomad node status

The Nomad UI can be accessed at http://52.249.185.10:4646/ui
The Consul UI can be accessed at http://52.249.185.10:8500/ui
Grafana dashboard can be accessed at http://52.249.187.190:3000/d/AQphTqmMk/demo?orgId=1&refresh=5s
Traefik can be accessed at http://52.249.187.190:8081
Prometheus can be accessed at http://52.249.187.190:9090
Webapp can be accessed at http://52.249.187.190:80

CLI environment variables:
export NOMAD_CLIENT_DNS=http://52.249.187.190
export NOMAD_ADDR=http://52.249.185.10:4646
```

You can visit the URLs and explore what has been created. This includes a
number of Nomad jobs that provide metrics and dashboards, as well as a demo
application routed by Traefik. It may take a few seconds for all the
applications to start, so if any of the URLs don't load the first time, wait
a moment and try again.

Please also copy the export commands and run them in the terminal where the
rest of the demo will be run.

The application is pre-configured with a scaling policy, which you can view by
opening the job file or calling the Nomad API. The application scales based on
the average number of active connections, targeting an average of 10 per
instance of the app.

```shellsession
$ curl $NOMAD_ADDR/v1/scaling/policies
```
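The same policies can also be inspected with the Nomad CLI, assuming a Nomad
version recent enough to include the `scaling` subcommands (0.11 or later):

```shellsession
$ nomad scaling policy list
$ nomad scaling policy info <policy-id>
```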

## Run the Autoscaler
The Autoscaler is not triggered automatically. This gives you the opportunity
to look through the job file and understand it before deploying. The most
important parts of the `azure_autoscaler.nomad` file are the two template
blocks. The first defines the agent config, where the `prometheus`,
`azure-vmss`, and `target-value` plugins are configured.

```hcl
template {
data = <<EOF
nomad {
address = "http://{{env "attr.unique.network.ip-address" }}:4646"
}

apm "prometheus" {
driver = "prometheus"
config = {
address = "http://{{ range service "prometheus" }}{{ .Address }}:{{ .Port }}{{ end }}"
}
}

target "azure-vmss" {
driver = "azure-vmss"
config = {
subscription_id = "${subscription_id}"
}
}

strategy "target-value" {
driver = "target-value"
}
EOF

destination = "$${NOMAD_TASK_DIR}/config.hcl"
}
```
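A small detail worth noting: `${subscription_id}` appears to be rendered by
Terraform when it generates `azure_autoscaler.nomad` (which is why that file
is gitignored), while `$${NOMAD_TASK_DIR}` is escaped so Terraform emits a
literal `${NOMAD_TASK_DIR}` for Nomad to interpolate at runtime.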

The second defines the cluster scaling policy and writes it to a local
directory for the agent to read.

```hcl
template {
data = <<EOF
enabled = true
min = 1
max = 2

policy {

cooldown = "2m"
evaluation_interval = "1m"

check "cpu_allocated_percentage" {
source = "prometheus"
query = "sum(nomad_client_allocated_cpu{node_class=\"hashistack\"}*100/(nomad_client_unallocated_cpu{node_class=\"hashistack\"}+nomad_client_allocated_cpu{node_class=\"hashistack\"}))/count(nomad_client_allocated_cpu{node_class=\"hashistack\"})"

strategy "target-value" {
target = 70
}
}

check "mem_allocated_percentage" {
source = "prometheus"
query = "sum(nomad_client_allocated_memory{node_class=\"hashistack\"}*100/(nomad_client_unallocated_memory{node_class=\"hashistack\"}+nomad_client_allocated_memory{node_class=\"hashistack\"}))/count(nomad_client_allocated_memory{node_class=\"hashistack\"})"

strategy "target-value" {
target = 70
}
}

target "azure-vmss" {
resource_group = "${resource_group}"
vm_scale_set = "clients"
node_class = "hashistack"
node_drain_deadline = "5m"
}
}
EOF

destination = "$${NOMAD_TASK_DIR}/policies/hashistack.hcl"
}
```
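Roughly speaking, the `target-value` strategy scales the group in proportion
to how far each metric sits from its target: with allocated CPU averaging 90%
against a target of 70, the desired client count comes out around the current
count × 90 / 70, clamped between `min` and `max` and rate-limited by the
`cooldown`.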

Once you have an understanding of the job file, submit it to the Nomad
cluster, making sure the `NOMAD_ADDR` environment variable has been exported.

```shellsession
$ nomad run azure_autoscaler.nomad
```

If you wish, in another terminal window you can export the `NOMAD_ADDR`
environment variable and then follow the Nomad Autoscaler logs.

```shellsession
$ nomad logs -stderr -f <alloc-id>
```
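To find the allocation ID, check the job's status first; the job name below
is an assumption, so use whatever name is declared in `azure_autoscaler.nomad`:

```shellsession
$ nomad job status azure_autoscaler
```

The allocation ID appears in the Allocations table at the bottom of the output.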

You can now return to the [demo instructions](../README.md#the-demo).
45 changes: 45 additions & 0 deletions demo/remote/azure/packer/azure-packer.pkr.hcl
@@ -0,0 +1,45 @@
variable "client_id" {}
variable "client_secret" {}
variable "resource_group" {}
variable "subscription_id" {}
variable "location" { default = "East US" }
variable "image_name" { default = "hashistack" }

source "azure-arm" "hashistack" {
azure_tags = {
Product = "Hashistack"
}
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
image_offer = "UbuntuServer"
image_publisher = "Canonical"
image_sku = "18.04-LTS"
location = "${var.location}"
managed_image_name = "${var.image_name}"
managed_image_resource_group_name = "${var.resource_group}"
os_type = "Linux"
ssh_username = "packer"
subscription_id = "${var.subscription_id}"
}

build {
sources = [
"source.azure-arm.hashistack"
]

provisioner "shell" {
inline = [
"sudo mkdir -p /ops",
"sudo chmod 777 /ops"
]
}

provisioner "file" {
source = "../../shared/packer/"
destination = "/ops"
}

provisioner "shell" {
script = "../../shared/packer/scripts/setup.sh"
}
}
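The diff doesn't show how this Packer template is invoked; a plausible build
command, reusing the service principal credentials from the environment setup
above and a pre-created resource group (the name here is hypothetical), would
be:

```shellsession
$ packer build \
    -var "client_id=$ARM_CLIENT_ID" \
    -var "client_secret=$ARM_CLIENT_SECRET" \
    -var "subscription_id=$ARM_SUBSCRIPTION_ID" \
    -var "resource_group=hashistack-images" \
    azure-packer.pkr.hcl
```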
2 changes: 2 additions & 0 deletions demo/remote/azure/terraform/.gitignore
@@ -0,0 +1,2 @@
azure-hashistack.pem
azure_autoscaler.nomad
26 changes: 26 additions & 0 deletions demo/remote/azure/terraform/control/main.tf
@@ -0,0 +1,26 @@
provider "nomad" {
address = module.hashistack_cluster.nomad_addr
}

module "my_ip_address" {
source = "matti/resource/shell"

command = "curl https://ipinfo.io/ip"
}

module "hashistack_cluster" {
source = "../modules/azure-hashistack"

allowlist_ip = ["${module.my_ip_address.stdout}/32"]

# Use beta releases until GA
nomad_binary = "https://releases.hashicorp.com/nomad/1.0.0-beta2/nomad_1.0.0-beta2_linux_amd64.zip"
nomad_autoscaler_image = "hashicorp/nomad-autoscaler:0.2.0-beta2"
}

module "hashistack_jobs" {
source = "../../../terraform/modules/shared-nomad-jobs"
depends_on = [module.hashistack_cluster]

nomad_addr = module.hashistack_cluster.nomad_addr
}
28 changes: 28 additions & 0 deletions demo/remote/azure/terraform/control/outputs.tf
@@ -0,0 +1,28 @@
output "ip_addresses" {
value = <<CONFIGURATION

Server IPs:
${module.hashistack_cluster.server_addresses}


To connect, add your private key and SSH into any client or server with
`ssh -i azure-hashistack.pem -o IdentitiesOnly=yes ubuntu@PUBLIC_IP`.
You can test the integrity of the cluster by running:

$ consul members
$ nomad server members
$ nomad node status

The Nomad UI can be accessed at ${module.hashistack_cluster.nomad_addr}/ui
The Consul UI can be accessed at ${module.hashistack_cluster.consul_addr}/ui
Grafana dashboard can be accessed at http://${module.hashistack_cluster.clients_lb_public_ip}:3000/d/AQphTqmMk/demo?orgId=1&refresh=5s
Traefik can be accessed at http://${module.hashistack_cluster.clients_lb_public_ip}:8081
Prometheus can be accessed at http://${module.hashistack_cluster.clients_lb_public_ip}:9090
Webapp can be accessed at http://${module.hashistack_cluster.clients_lb_public_ip}:80

CLI environment variables:
export NOMAD_CLIENT_DNS=http://${module.hashistack_cluster.clients_lb_public_ip}
export NOMAD_ADDR=${module.hashistack_cluster.nomad_addr}

CONFIGURATION
}
63 changes: 63 additions & 0 deletions demo/remote/azure/terraform/modules/azure-hashistack/clients.tf
@@ -0,0 +1,63 @@
resource "azurerm_linux_virtual_machine_scale_set" "clients" {
depends_on = [
azurerm_lb_rule.clients_nomad,
azurerm_lb_rule.clients_consul,
azurerm_lb_rule.clients_grafana,
azurerm_lb_rule.clients_prometheus,
azurerm_lb_rule.clients_traefik,
azurerm_lb_rule.clients_http,
]

name = "clients"
location = azurerm_resource_group.hashistack.location
resource_group_name = azurerm_resource_group.hashistack.name
sku = var.client_vm_size
source_image_id = data.azurerm_image.hashistack.id
custom_data = base64encode(data.template_file.user_data_client.rendered)
instances = var.client_count
admin_username = "ubuntu"

network_interface {
name = "client-vmss-ni"
primary = true
network_security_group_id = azurerm_network_security_group.nomad_clients.id

ip_configuration {
name = "PrivateIPConfiguration"
primary = true
subnet_id = azurerm_subnet.primary.id
load_balancer_backend_address_pool_ids = [azurerm_lb_backend_address_pool.clients_lb.id]
public_ip_address {
name = "client-vmss-public-ip"
}
}
}

os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}

admin_ssh_key {
username = "ubuntu"
public_key = tls_private_key.main.public_key_openssh
}

identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.clients_vmss.id]
}
}

# Managed identity
resource "azurerm_user_assigned_identity" "clients_vmss" {
name = "clients-vmss"
resource_group_name = azurerm_resource_group.hashistack.name
location = azurerm_resource_group.hashistack.location
}

resource "azurerm_role_assignment" "clients_vmss" {
scope = data.azurerm_subscription.main.id
role_definition_name = "Contributor"
principal_id = azurerm_user_assigned_identity.clients_vmss.principal_id
}