This repository guides you through preparing an AKS cluster in the Azure Cloud Shell for use in Tigera's Calico Cloud workshop. The goal is to reduce the time spent setting up infrastructure during the workshop, maximizing the time for learning Calico Cloud and ensuring everyone starts from the same baseline.
The following are the basic requirements to start the workshop.
- Azure Account ([Azure Portal](https://portal.azure.com))
- Git ([Git SCM](https://git-scm.com))
- Azure Cloud Shell (https://shell.azure.com)
- Azure AKS Cluster
- Log in to the Azure Portal at http://portal.azure.com.
- Open the Azure Cloud Shell and choose the Bash shell (do not choose PowerShell).
- The first time you start Cloud Shell, you will be prompted to create a storage account.

  Note: In Cloud Shell, you are automatically logged in to your Azure subscription.
- [Optional] If you have more than one Azure subscription, ensure you are using the one you want to deploy AKS to.

  View your subscriptions:

  ```bash
  az account list
  ```

  Verify the selected subscription:

  ```bash
  az account show
  ```

  Set the correct subscription (if needed):

  ```bash
  az account set --subscription <subscription_id>
  ```

  Verify the correct subscription is now set:

  ```bash
  az account show
  ```
- Configure kubectl autocompletion.

  ```bash
  source <(kubectl completion bash) && source /usr/share/bash-completion/bash_completion
  echo "source <(kubectl completion bash)" >> ~/.bashrc
  echo "source /usr/share/bash-completion/bash_completion" >> ~/.bashrc
  ```
  You can also use a shorthand alias for kubectl that works with completion:

  ```bash
  alias k=kubectl
  complete -o default -F __start_kubectl k
  echo "alias k=kubectl" >> ~/.bashrc
  echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
  ```
- [Optional] Install k9s, if you like it.

  ```bash
  curl --silent --location "https://github.com/derailed/k9s/releases/download/v0.32.3/k9s_Linux_amd64.tar.gz" | tar xz -C /tmp
  mkdir -p ~/.local/bin
  mv /tmp/k9s ~/.local/bin
  k9s version
  ```
- Define the environment variables to be used by the resource definitions.

  NOTE: The following commands create some environment variables. If your terminal session restarts, you may need to reset them, which you can do with:

  ```bash
  source ~/workshopvars.env
  ```

  ```bash
  export RESOURCE_GROUP=tigera-workshop
  export CLUSTERNAME=aks-workshop
  export LOCATION=canadacentral
  # Persist for later sessions in case of disconnection.
  echo export RESOURCE_GROUP=$RESOURCE_GROUP > ~/workshopvars.env
  echo export CLUSTERNAME=$CLUSTERNAME >> ~/workshopvars.env
  echo export LOCATION=$LOCATION >> ~/workshopvars.env
  ```
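If you would rather not run `source ~/workshopvars.env` by hand after every reconnect, a guarded line in `~/.bashrc` can restore the variables automatically. The `add_restore_line` helper below is a hypothetical convenience, not part of the official workshop steps; it is idempotent, so running it twice adds the line only once:

```shell
#!/usr/bin/env bash
# Hypothetical helper: append a guarded "source" line to a shell rc file so the
# workshop variables file is loaded automatically in every new session.
add_restore_line() {
  local rcfile="$1" varsfile="$2"
  local line="[ -f $varsfile ] && source $varsfile"
  # Only append if the exact line is not already present (idempotent).
  grep -qxF "$line" "$rcfile" 2>/dev/null || echo "$line" >> "$rcfile"
}

# Example (assumed paths):
#   add_restore_line ~/.bashrc ~/workshopvars.env
```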
- If not already created, create the Resource Group in the desired Region.

  ```bash
  az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
  ```
- Create the AKS cluster with the Azure CNI network plugin.

  ```bash
  az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTERNAME \
    --kubernetes-version 1.29 \
    --location $LOCATION \
    --node-count 2 \
    --node-vm-size Standard_B2ms \
    --max-pods 100 \
    --generate-ssh-keys \
    --network-plugin azure
  ```
- Verify your cluster status. The `ProvisioningState` should be `Succeeded`.

  ```bash
  az aks list -o table | grep $CLUSTERNAME
  ```

  You may get an output like the following:

  ```
  WARNING: [Warning] This output may compromise security by showing the following secrets: keyData, ssh, linuxProfile, publicKeys. Learn more at: https://go.microsoft.com/fwlink/?linkid=2258669
  aks-workshop  canadacentral  tigera-workshop  1.29  1.29.2  Succeeded  aks-worksh-tigera-workshop-03cfb8-mllwb5a6.hcp.canadacentral.azmk8s.io
  ```
- Get the credentials to connect to the cluster.

  ```bash
  az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
  ```
- Verify you have API access to your new AKS cluster.

  ```bash
  kubectl get nodes
  ```

  The output will be similar to this:

  ```
  NAME                                STATUS   ROLES    AGE   VERSION
  aks-nodepool1-30304837-vmss000000   Ready    <none>   10m   v1.29.2
  aks-nodepool1-30304837-vmss000001   Ready    <none>   10m   v1.29.2
  ```
  To see more details about your cluster:

  ```bash
  kubectl cluster-info
  ```

  The output will be similar to this:

  ```
  Kubernetes control plane is running at https://aks-zero-t-rg-zero-trust-wo-03cfb8-b3feb0f8.hcp.canadacentral.azmk8s.io:443
  CoreDNS is running at https://aks-zero-t-rg-zero-trust-wo-03cfb8-b3feb0f8.hcp.canadacentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  Metrics-server is running at https://aks-zero-t-rg-zero-trust-wo-03cfb8-b3feb0f8.hcp.canadacentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

  To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  ```
  You should now have a Kubernetes cluster running with 2 nodes. You do not see the control plane nodes because they are managed by Microsoft. The control plane services that manage the Kubernetes cluster, such as scheduling, API access, the configuration data store, and the object controllers, are all provided as services to the nodes.
- Verify the settings required for Calico Cloud.

  ```bash
  az aks show --resource-group $RESOURCE_GROUP --name $CLUSTERNAME --query 'networkProfile'
  ```

  You should see `"networkPlugin": "azure"` and `"networkPolicy": null` (a null `networkPolicy` is simply omitted from the output).
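As a quick sanity check, those two fields can also be inspected in the shell. The `check_network_profile` function below is a hypothetical helper (not part of the official workshop steps) that parses the JSON returned by the `az aks show` query above using plain `grep`/`cut`:

```shell
#!/usr/bin/env bash
# Hypothetical check: given the networkProfile JSON from
# `az aks show ... --query networkProfile`, confirm the settings Calico Cloud
# expects: networkPlugin set to "azure" and no networkPolicy configured.
check_network_profile() {
  local json="$1"
  local plugin policy
  plugin=$(printf '%s' "$json" | grep -o '"networkPlugin": *"[^"]*"' | cut -d'"' -f4)
  # A null networkPolicy has no quoted value, so this stays empty for null.
  policy=$(printf '%s' "$json" | grep -o '"networkPolicy": *"[^"]*"' | cut -d'"' -f4)
  if [ "$plugin" = "azure" ] && [ -z "$policy" ]; then
    echo "OK"
  else
    echo "MISMATCH: plugin=$plugin policy=$policy"
  fi
}

# Example:
#   check_network_profile "$(az aks show -g $RESOURCE_GROUP -n $CLUSTERNAME --query networkProfile)"
```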
- Verify transparent mode by running the following command on one node.

  ```bash
  VMSSGROUP=$(az vmss list --output table | grep -i $RESOURCE_GROUP | grep -i $CLUSTERNAME | awk -F ' ' '{print $2}')
  VMSSNAME=$(az vmss list --output table | grep -i $RESOURCE_GROUP | grep -i $CLUSTERNAME | awk -F ' ' '{print $1}')
  az vmss run-command invoke -g $VMSSGROUP -n $VMSSNAME --scripts "cat /etc/cni/net.d/*" --command-id RunShellScript --instance-id 0 --query 'value[0].message' --output table
  ```

  The output should contain `"mode": "transparent"`.
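Rather than eyeballing the run-command output, the mode value can be pulled out with a short pipeline. `cni_mode` below is a hypothetical helper working on the text returned by the `az vmss run-command invoke` call above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: extract the CNI "mode" value from the text returned by
# `az vmss run-command invoke ... --scripts "cat /etc/cni/net.d/*"`.
cni_mode() {
  printf '%s' "$1" | grep -o '"mode": *"[^"]*"' | cut -d'"' -f4 | head -n1
}

# For an Azure CNI cluster ready for Calico Cloud, expect "transparent".
```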
- Stop the cluster until the workshop starts.

  ```bash
  az aks stop --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
  ```
- To start your cluster when the workshop begins, use:

  ```bash
  az aks start --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
  ```
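To confirm whether the cluster is stopped or running, you can query its `powerState` with `az aks show -g $RESOURCE_GROUP -n $CLUSTERNAME --query powerState`. The `power_state` function below is a hypothetical helper that extracts the state code from that JSON:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report the cluster power state, given the JSON from
# `az aks show -g $RESOURCE_GROUP -n $CLUSTERNAME --query powerState`.
# Expect "Stopped" after `az aks stop` and "Running" after `az aks start`.
power_state() {
  printf '%s' "$1" | grep -o '"code": *"[^"]*"' | cut -d'"' -f4
}
```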