- Description
- Structure and conventions
- Pre-requisites
- Sandbox provisioning
- Verifying results
This repository contains code to automatically provision and configure a sandbox environment for students working on the VHI Operations Professional training course.
This repository is intended for Virtuozzo Technical Trainers to provision a sandbox for students on top of Virtuozzo Hybrid Infrastructure cloud. However, it can benefit anyone with access to an OpenStack or Virtuozzo Hybrid Infrastructure project who wishes to complete the VHI Operations Professional course.
The resulting sandbox will consist of 5 VMs and pre-configured virtual network infrastructure. Here is a diagram of the sandbox infrastructure students will work with:
The Terraform plan will not provision the `node5.lab` VM. Deploying this VM is one of the exercises students complete during the course.
The repository contains:
- Terraform plan files, ending with the `.tf` extension.
- Shell scripts, ending with the `.sh` extension.
- Auxiliary files required for students to complete the course (`.zip`).
Terraform plan files follow this naming scheme:
- `00_vars_*.tf` files contain variables.
- `10_data_*.tf` files contain runtime data collection modules.
- `20_res_*.tf` files contain resource definitions.
To use this automation, your environment must meet the requirements described below.
- The OpenStack or VHI cloud must support nested virtualization.
How to test if nested virtualization is enabled.
On Intel CPUs, you can test if the cloud supports nested virtualization by deploying a test VM and executing the following command:
# grep vmx /proc/cpuinfo
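On AMD CPUs the flag to look for is `svm` rather than `vmx`. A vendor-neutral sketch of the same check, to be run inside the test VM:

```shell
# Report whether the test VM exposes hardware virtualization flags.
# vmx = Intel VT-x, svm = AMD-V; if neither flag is present, nested
# virtualization is unavailable in this cloud.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
  echo "nested virtualization supported"
else
  echo "no vmx/svm flags found"
fi
```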
The cloud project must provide the following resources:
- vCPU: 74 cores.
- RAM: 148 GiB.
- Disk space: 2000 GiB.
- Public IPs: 2.
The project you are working with must have the following images:
- VHI QCOW2 image.
  - The image must have `cloud-init` installed.
  - If you are not a Virtuozzo employee, request the appropriate image from your Onboarding Manager.
- Ubuntu 20.04 QCOW2 image.
  - The image must have `cloud-init` installed.
  - You can get the latest version of the image from the official Ubuntu website: https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img
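After downloading, the image can be uploaded into the project with the OpenStack CLI. A sketch, assuming the CLI is installed and your credentials are sourced; the image name `Ubuntu-20.04` is only an example:

```shell
# Download the Ubuntu 20.04 cloud image and upload it into the project.
# Assumes the OpenStack CLI is installed and credentials are sourced;
# the image name "Ubuntu-20.04" is only an example.
upload_ubuntu_image() {
  wget -O ubuntu-20.04.img \
    https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img
  openstack image create --disk-format qcow2 --container-format bare \
    --file ubuntu-20.04.img "Ubuntu-20.04"
}
```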
To provision a sandbox, you will need to complete five steps:
- Clone this repository to your workstation.
- Install Terraform on your workstation.
- Adjust Terraform variables.
- Adjust and source the OpenStack credentials file.
- Apply Terraform configuration.
git clone https://github.com/virtuozzo/vhi-ops-professional
cd vhi-ops-professional
Download and install Terraform for your operating system from the Terraform website.
You will need to adjust four variable files:
- `00_vars_access.tf` to set the SSH key path for the sandbox.
- `00_vars_bastion.tf` to set variables related to the Bastion VM.
- `00_vars_network.tf` to set variables related to networking.
- `00_vars_vhi_cluster.tf` to set variables related to VHI nodes.
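As an alternative to editing the defaults in each file, standard Terraform also lets you override the same variables from a `terraform.tfvars` file. A sketch using this repository's variable names; every value below is a placeholder to replace with names from your cloud:

```hcl
# terraform.tfvars -- overrides the defaults in the 00_vars_*.tf files.
# All values are placeholders; replace them with the names from your cloud.
ssh-key                = "~/.ssh/student.pub"
bastion-image          = "Ubuntu-20.04"
bastion-flavor         = "va-2-4"
bastion-storage_policy = "default"
vhi-image              = "VHI-latest.qcow2"
vhi-flavor_main        = "va-16-32"
vhi-flavor_worker      = "va-8-16"
vhi-storage_policy     = "default"
external_network-name  = "public"
```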
You need to set the `ssh-key` variable in the `00_vars_access.tf` file to point to your public SSH key. For example, if your SSH key is located at `~/.ssh/student.pub`, the variable should look like this:
## Bastion/Node access SSH key
variable "ssh-key" {
type = string
default = "~/.ssh/student.pub" # Replace with the path to your public SSH key
}
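If you do not yet have a key pair at that path, you can generate one with `ssh-keygen`. The path `~/.ssh/student` is just the example used above:

```shell
# Generate an SSH key pair for the sandbox if one does not already exist.
# KEY_PATH matches the example path used by the ssh-key variable above.
KEY_PATH="${KEY_PATH:-$HOME/.ssh/student}"
mkdir -p "$(dirname "$KEY_PATH")"
[ -f "$KEY_PATH" ] || ssh-keygen -t ed25519 -f "$KEY_PATH" -N "" -q
```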
You need to adjust three variables in the `00_vars_bastion.tf` file:
- Bastion image name.
- Bastion flavor.
- Bastion storage policy.
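Image, flavor, and storage policy names differ between clouds. Assuming the OpenStack CLI is installed and your credentials file has been sourced, you can look up the names used in the variables below like this:

```shell
# Look up the names available in your project before editing the
# variable files. Assumes the OpenStack CLI is installed and the
# credentials file has been sourced.
list_cloud_resources() {
  openstack image list                # image names (Bastion, VHI)
  openstack flavor list               # flavor names and their CPU/RAM sizes
  openstack volume type list          # storage policy names
  openstack network list --external   # external (physical) networks
}
```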
You need to set the `bastion-image` variable to the name of the Bastion image in your project. For example, if in your cloud the Bastion image is named `Ubuntu-20.04`, the variable should look like this:
## Bastion image
variable "bastion-image" {
type = string
default = "Ubuntu-20.04" # If required, replace the image name with the one you have in the cloud
}
You need to set the `bastion-flavor` variable to a flavor name that provides at least 2 CPU cores and 4 GiB of RAM. For example, if in your cloud such a flavor is named `va-2-4`, the variable should look like this:
## Bastion flavor
variable "bastion-flavor" {
type = string
default = "va-2-4" # If required, replace the flavor name with the one you have in the cloud
}
You need to set the `bastion-storage_policy` variable to a storage policy with at least 10 GB of storage in the project's quota. For example, if in your cloud such a policy is named `default`, the variable should look like this:
## Bastion storage policy
variable "bastion-storage_policy" {
type = string
default = "default" # If required, replace the storage policy with the one you have in the cloud
}
You need to adjust four variables in the `00_vars_vhi_cluster.tf` file:
- VHI image name.
- Main node flavor.
- Worker node flavor.
- VHI node storage policy.
You need to set the `vhi-image` variable to the name of the VHI image in your project. For example, if in your cloud the VHI image is named `VHI-latest.qcow2`, the variable should look like this:
## VHI image name
variable "vhi-image" {
type = string
default = "VHI-latest.qcow2" # If required, replace the image name with the one you have in the cloud
}
You need to set the `vhi-flavor_main` variable to a flavor name that provides at least 16 CPU cores and 32 GiB of RAM. For example, if in your cloud such a flavor is named `va-16-32`, the variable should look like this:
## Main node flavor name
variable "vhi-flavor_main" {
type = string
default = "va-16-32" # If required, replace the flavor name with the one you have in the cloud
}
You need to set the `vhi-flavor_worker` variable to a flavor name that provides at least 8 CPU cores and 16 GiB of RAM. For example, if in your cloud such a flavor is named `va-8-16`, the variable should look like this:
## Worker node flavor name
variable "vhi-flavor_worker" {
type = string
default = "va-8-16" # If required, replace the flavor name with the one you have in the cloud
}
You need to set the `vhi-storage_policy` variable to a storage policy with at least 1750 GB of storage in the project's quota. For example, if in your cloud such a policy is named `default`, the variable should look like this:
## VHI node storage policy
variable "vhi-storage_policy" {
type = string
default = "default" # If required, replace the storage policy with the one you have in the cloud
}
You need to set the `external_network-name` variable in the `00_vars_network.tf` file to point to the physical network with Internet access. For example, if your physical network is called `public`, the variable should look like this:
## External network
variable "external_network-name" {
type = string
default = "public" # If required, replace the network name with the one you have in the cloud
}
This repository contains an `openstack-creds.sh` file you can adjust to get a usable OpenStack credentials file. In it, you will need to change several environment variables related to your OpenStack credentials. Follow the instructions in the file:
export OS_PROJECT_DOMAIN_NAME=vhi-ops # replace "vhi-ops" with your domain name
export OS_USER_DOMAIN_NAME=vhi-ops # replace "vhi-ops" with your domain name
export OS_PROJECT_NAME=student1 # replace "student1" with your project name
export OS_USERNAME=user.name # replace "user.name" with your user name
export OS_PASSWORD=********** # replace "**********" with password of your user
export OS_AUTH_URL=https://mycloud.com:5000/v3 # replace "mycloud.com" with the base URL of your cloud panel
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_TYPE=password
export OS_INSECURE=true
export PYTHONWARNINGS="ignore:Unverified HTTPS request is being made"
export NOVACLIENT_INSECURE=true
export NEUTRONCLIENT_INSECURE=true
export CINDERCLIENT_INSECURE=true
export OS_PLACEMENT_API_VERSION=1.22
export CLIFF_FIT_WIDTH=1
After you adjust the `openstack-creds.sh` file, source it in your terminal:
source openstack-creds.sh
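Before running Terraform, you can confirm that the sourced credentials are valid. A quick check, assuming the OpenStack CLI (`python-openstackclient`) is installed:

```shell
# Verify the sourced credentials by requesting an authentication token.
# Assumes the OpenStack CLI is installed (pip install python-openstackclient).
check_openstack_creds() {
  if openstack token issue >/dev/null 2>&1; then
    echo "credentials OK"
  else
    echo "authentication failed: re-check openstack-creds.sh" >&2
    return 1
  fi
}
```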
Initialize Terraform in the directory and apply the Terraform plan that will set up the sandbox:
terraform init && terraform apply
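If you prefer to review the changes before applying them, the standard Terraform plan/apply workflow can be used instead; a sketch (the plan file name `sandbox.tfplan` is arbitrary):

```shell
# Optional: review the planned changes before applying them.
# The plan file name "sandbox.tfplan" is arbitrary.
provision_sandbox() {
  terraform init                      # download the required providers
  terraform plan -out=sandbox.tfplan  # preview the resources to be created
  terraform apply sandbox.tfplan      # apply exactly the reviewed plan
}
```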
Wait at least 20 minutes before proceeding! Terraform will configure all VMs at first boot, which can take some time depending on the cloud performance and internet connection speed.
After applying the Terraform plan and waiting for scripts to complete the environment's configuration, you may proceed to verify the access.
If you are not a Virtuozzo employee, request Bastion VM credentials from your Onboarding Manager.
Connect to Bastion VM using the remote console. If Bastion VM is still being configured, you will see the following prompt:
Once the configuration of Bastion is complete, you should see the graphical login prompt:
Students are expected to work with their sandbox using an RDP connection to the Bastion VM. To verify that the nested VHI cluster is ready for students to begin training, do the following:
- Connect to the Bastion VM using an RDP client on port `3390`.
- Access the nested VHI Admin Panel using the desktop shortcut (username: `admin`; password: `Lab_admin`):
- Navigate to the Compute section in the left-hand menu:
You should see the compute cluster deployment progress bar:
Once the compute cluster is deployed, the sandbox is ready for use.
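The RDP connection from the verification steps above can be made with any RDP client; for example, with FreeRDP. Both the IP address and the `student` username below are placeholders, not values from this repository; use your sandbox's public IP and the Bastion credentials you were given:

```shell
# Hypothetical example: connect to the Bastion VM with FreeRDP.
# BASTION_IP and the username "student" are placeholders; use your
# sandbox's public IP and your actual Bastion credentials.
rdp_to_bastion() {
  local BASTION_IP="$1"
  xfreerdp /v:"${BASTION_IP}:3390" /u:student /dynamic-resolution
}
```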