Solution version release: v2.0
- Scale up cluster size
- Kubernetes version is now v1.21.2 (see the verification commands below)
- Ansible is the tool used for provisioning
- Helm 3 used for managing Kubernetes apps
- Nginx Ingress Controller stable version 1.0.0
- Everything is automated; installing dependencies is now a one-step script
- Operating system auto-detection when installing requirements
- Ubuntu used for all components
- WordPress database is now deployed as a StatefulSet
- Python 3 support only
- Immutable infrastructure provisioned with Vagrant
- Configuration management with Ansible
- Installing requirements with Bash scripts
- Nginx-Ingress Controller
- Kubernetes control-plane
- Two worker nodes
- NFS server
- HAProxy
- API gateway
- Custom storage class for volume provisioning
- Loadbalancer
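Once the cluster is up, a quick way to confirm the versions and components listed above (a sketch; it assumes you run it on the master node, where kubectl, helm and the kubeconfig are already set up):
$ kubectl version --short      # client and server should both report v1.21.2
$ helm version --short         # Helm 3
$ kubectl get nodes            # one control-plane node and two worker nodes
$ kubectl get storageclass     # the custom storage class for volume provisioning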
Python has officially dropped support for previous versions on 23/01/21. This solution uses pip and is now patched to support Python 3.
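Before running the install scripts you can quickly confirm the host Python setup (this assumes python3 is already on your PATH):
$ python3 --version         # any Python 3.x release
$ python3 -m pip --version  # pip running under Python 3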
The solution is tested end-to-end on both hosts:
- macOS Catalina 10.15.5 with Oracle VirtualBox 6.1, Vagrant 2.2.9
- Ubuntu Linux 18.04 LTS with Oracle VirtualBox 6.1, Vagrant 2.2.9
- Current cluster dimensioning is recommended for hosts with 16 GB of RAM or more. If your host has less than that, I recommend provisioning the cluster with one node fewer. This can be done by setting the NodeCount value in the Vagrantfile to one:
NodeCount = 1
# Kubernetes nodes
(1..NodeCount).each do |i|
  config.vm.define "node#{i}" do |node|
    node.vm.box = "ubuntu/bionic64"
    ...
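With NodeCount set to 1, node2 is simply not defined anymore, so the usual commands provision one worker less (VM names as used elsewhere in this document):
$ vagrant status   # node2 should no longer be listed
$ vagrant up       # brings up master, node1, nfs and haproxy only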
Vagrant, Ansible, VirtualBox. Too many players, too many technologies in one place. Convenient, but maybe not super stable all of the time. If you experience any errors during provisioning, you can always resume from where you stopped, either by running only the Ansible provisioning on a certain node or by re-running the infrastructure provisioning as well:
$ vagrant provision master # To re-provision if something happened during ansible provisioning
$ vagrant up node1 node2 # To continue after master has been re-provisioned manually
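If a VM is broken beyond re-provisioning, you can also recreate just that machine from scratch; thanks to the immutable infrastructure approach, nothing else is affected:
$ vagrant destroy -f node1   # throw away the broken VM
$ vagrant up node1           # recreate and re-provision it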
- If you discover new ones, email me: [email protected]
The NFS dynamic provisioning issue from v1.20.x (volumes are not created) is still present in the current version, v1.21.2:
"I recently upgraded a bare-metal Kubernetes cluster to the latest v1.20.0. But since then, I am no longer able to provision additional PersistentVolumes. Existing PVCs which were already bound prior to the upgrade still work flawlessly. ~ openebs/openebs#3314"
Solution used from kubernetes-sigs/nfs-subdir-external-provisioner#25 (comment).
The current workaround is to edit /etc/kubernetes/manifests/kube-apiserver.yaml
Under here:
spec:
  containers:
  - command:
    - kube-apiserver
Add this line:
    - --feature-gates=RemoveSelfLink=false
Then do this:
$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
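After the API server restarts with the flag, dynamic provisioning should work again. A quick sanity check (the storage class and PVC names depend on what you deploy):
$ kubectl get storageclass   # the NFS storage class should be listed
$ kubectl get pvc -A         # newly created PVCs should reach the Bound state
$ kubectl get pv             # and matching PVs should appear on the NFS share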
On Linux you will be asked to run the script with sudo. When running on macOS you will be asked for the sudo password during script execution:
$ cd infrastructure
$ ./install_requirements # One-step script that installs the host dependencies
$ vagrant up # Provisions all VMs
$ ./infra_setup # Infrastructure setup
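When the last step finishes, you can log in to the control-plane VM and check that all nodes joined the cluster (this assumes the VM is named master, as in the rest of this document):
$ vagrant ssh master
$ kubectl get nodes -o wide   # master, node1 and node2 should be Ready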
VM Role | CPU | RAM | IP Address |
---|---|---|---|
Master | 2 | 2 GB | 172.42.42.100 |
Node 1 | 1 | 1.5 GB | 172.42.42.101 |
Node 2 | 1 | 1.5 GB | 172.42.42.102 |
Loadbalancer | 1 | 512 MB | 172.42.42.10 |
NFS storage | 1 | 512 MB | 172.42.42.20 |
TOTAL | 6 CPU | 6 GB | |
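A quick way to confirm that all five VMs are running and reachable on the host-only network (IP addresses from the table above):
$ vagrant status           # all machines should be in the running state
$ ping -c 1 172.42.42.100  # control-plane reachable from the host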
Deploy the demo WordPress site. Check the documentation under the website directory.
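After deploying, you can watch the site come up; the exact resource names come from the manifests in the website directory, so adjust as needed:
$ kubectl get pods -w        # WordPress and database pods starting
$ kubectl get statefulset    # the database StatefulSet from the release notes
$ kubectl get ingress        # the site exposed through the Nginx Ingress Controller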
When you are done using the cluster, shut it down to release host resources. Always power Vagrant-provisioned VMs on and off gracefully:
$ cd infrastructure
$ vagrant halt # Shuts down gracefully
$ vagrant up # Starts with config checks
Restart the provisioning of haproxy:
$ vagrant provision haproxy
You don't always have to provision the entire solution. You can select which components to provision:
$ vagrant up node3 # Add third node to the cluster
$ vagrant up nfs # Provision only the NFS server
Your host probably has 8 GB of RAM and you have provisioned the complete solution, which now consumes up to 75% of your available resources. Scale down the cluster size by removing one node:
$ vagrant halt node2 # Stop the node temporarily
$ vagrant destroy -f node2 # Delete the node permanently from the cluster
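The control plane will keep listing the destroyed VM as a NotReady node. To clean that up too, remove the node object from the cluster (run on the master; the node name is assumed to match the VM hostname):
$ kubectl delete node node2   # drop the stale node object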
Yes. You can scale up or down. Change the NodeCount value in the Vagrantfile to match the new cluster size, e.g. NodeCount = 5, and bring the new nodes up:
$ vagrant up node3 node4 node5
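The new nodes should join the cluster during their provisioning, just like node1 and node2 did. Confirm from the master:
$ kubectl get nodes   # node3, node4 and node5 should appear and become Ready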
Sometimes you may want to start over clean. The immutable infrastructure approach allows you to do that in a matter of minutes:
$ vagrant destroy -f master node1 node2 nfs haproxy
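Rebuilding afterwards is just the initial provisioning again, assuming the same flow as in the installation steps above:
$ vagrant up       # recreate all VMs
$ ./infra_setup    # re-run the cluster provisioning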