diff --git a/doc/user-guide/usage.md b/doc/user-guide/usage.md
new file mode 100644
index 000000000..93fcc5be8
--- /dev/null
+++ b/doc/user-guide/usage.md
@@ -0,0 +1,386 @@

## Deploying LVMCluster CR

This guide assumes you have followed the steps in the [Readme][repo-readme] and
that the LVM operator (hereafter LVMO) is running in your cluster.

Below are the available disks on our test Kubernetes cluster node. There are no
existing LVM Physical Volumes, Volume Groups or Logical Volumes.

``` console
sh-4.4# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb         8:16   0 893.8G  0 disk
|-sdb1      8:17   0     1M  0 part
|-sdb2      8:18   0   127M  0 part
|-sdb3      8:19   0   384M  0 part /boot
`-sdb4      8:20   0 893.3G  0 part /sysroot
sr0        11:0    1   987M  0 rom
nvme0n1   259:0    0   1.5T  0 disk
nvme1n1   259:1    0   1.5T  0 disk
nvme2n1   259:2    0   1.5T  0 disk
sh-4.4# pvs
sh-4.4# vgs
sh-4.4# lvs
```

Here LVMO is installed in the `lvm-operator-system` namespace via the `make deploy`
target; the operations below do not change if LVMO is installed in any other namespace.

``` console
kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
controller-manager-8bf864c85-8zjlp   3/3     Running   0          96s
```

After all containers in the above listing are in the `READY` state, we can
proceed with deploying the LVMCluster CR.

``` yaml
cat <<EOF | kubectl create -f -
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster-sample
spec:
  storage:
    deviceClasses:
    - name: vg1
EOF
```

- Once the LVMCluster CR is successfully reconciled, a storage class with the
  name `topolvm-<deviceClass name>` will be created
``` console
# kubectl get storageclass
NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
topolvm-vg1   topolvm.cybozu.com   Delete          WaitForFirstConsumer   true                   31m
```

Note:
- Reconciling multiple LVMCluster CRs is not supported by LVMO
- The custom resources LVMVolumeGroup and LVMVolumeGroupNodeStatus are managed
  by LVMO and users should not edit them.

## Deploying PVC and App Pod

- A successful reconciliation of the LVMCluster CR sets up all the underlying
  resources needed to create a PVC and use it in an app pod. This can be
  verified from the LVMCluster CR status field:
``` console
# kubectl get lvmclusters.lvm.topolvm.io -ojsonpath='{.items[*].status.deviceClassStatuses[*]}' | python3 -mjson.tool
{
    "name": "vg1",
    "nodeStatus": [
        {
            "devices": [
                "/dev/nvme0n1",
                "/dev/nvme1n1",
                "/dev/nvme2n1"
            ],
            "node": "kube-node",
            "status": "Ready"
        }
    ]
}
```
- Create PVCs using the StorageClass created for the deviceClass. The PVCs will
  not be bound until a pod claims the storage, as the volume binding mode is
  set to `WaitForFirstConsumer`.
``` yaml
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: topolvm-vg1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 5Gi
  storageClassName: topolvm-vg1
EOF
```
- Deploy the app pods `app-file` and `app-block`, which consume the above PVCs;
  the PVCs get bound once the pods are scheduled (a minimal sketch of such pods
  is shown below)
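
The exact app pod manifests are not prescribed by this guide, so the following
is only a minimal sketch: the pod and claim names (`app-file`, `app-block`,
`lvm-file-pvc`, `lvm-block-pvc`) come from the steps above, while the `busybox`
image, the sleep command and the mount/device paths are illustrative
assumptions.

``` yaml
# Hypothetical app pods: only the pod and PVC names are taken from this guide,
# image and paths are illustrative.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-file
spec:
  containers:
  - name: app-file
    image: busybox                  # illustrative image
    command: ["sh", "-c", "sleep 1d"]
    volumeMounts:
    - name: lvm-file
      mountPath: /mnt/file          # illustrative mount path
  volumes:
  - name: lvm-file
    persistentVolumeClaim:
      claimName: lvm-file-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: app-block
spec:
  containers:
  - name: app-block
    image: busybox                  # illustrative image
    command: ["sh", "-c", "sleep 1d"]
    volumeDevices:
    - name: lvm-block
      devicePath: /dev/block        # illustrative device path inside the container
  volumes:
  - name: lvm-block
    persistentVolumeClaim:
      claimName: lvm-block-pvc
EOF
```

Once both pods are running, `kubectl get pvc` should report the PVCs as `Bound`,
and `kubectl get logicalvolumes.topolvm.cybozu.com` should list the backing
topolvm volumes.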

## Viewing Topolvm Metrics

- Topolvm exposes metrics about the volume groups it manages. They can be
  viewed by port forwarding the metrics port (8080) of one of the topolvm node
  pods and querying the metrics endpoint:
``` console
# port forward the metrics port of a topolvm node pod to a random local port
kubectl port-forward pod/<topolvm-node-pod> :8080
Forwarding from 127.0.0.1:41685 -> 8080
Forwarding from [::1]:41685 -> 8080
...
...

# in another terminal, view the metrics at the above port on localhost
curl -s 127.0.0.1:41685/metrics | grep -Ei 'topolvm_volumegroup_.*?_bytes\{'
topolvm_volumegroup_available_bytes{device_class="vg1",node="kube-node"} 4.790222323712e+12
topolvm_volumegroup_size_bytes{device_class="vg1",node="kube-node"} 4.800959741952e+12
```

## Cleanup

- A feature to automatically clean up the volume groups after the LVMCluster CR
  is removed is coming soon
- Until then, please perform the following steps to clean up the resources
  created by the operator

### Steps

1. Remove all the app pods which are using PVCs created with topolvm
``` console
# delete App pods first
kubectl delete pod app-file app-block
pod "app-file" deleted
pod "app-block" deleted

# delete PVCs which were used by above App pods
kubectl delete pvc lvm-file-pvc lvm-block-pvc
persistentvolumeclaim "lvm-file-pvc" deleted
persistentvolumeclaim "lvm-block-pvc" deleted
```
2. Make sure there are no `logicalvolumes` CRs left behind by topolvm
``` console
kubectl get logicalvolumes.topolvm.cybozu.com
No resources found
```
3. Take a JSON dump of the LVMCluster CR contents. It contains the list of VGs
   and PVs created on the nodes.
``` console
kubectl get lvmclusters.lvm.topolvm.io -ojson > /tmp/lvmcr.json
```
4. Either parse the contents of the above JSON file via jq, or print the status
   of the CR directly
``` console
kubectl get lvmclusters.lvm.topolvm.io -ojsonpath='{.items[*].status.deviceClassStatuses[*]}' | python3 -mjson.tool
```
``` json
{
    "name": "vg1",
    "nodeStatus": [
        {
            "devices": [
                "/dev/nvme0n1",
                "/dev/nvme1n1",
                "/dev/nvme2n1"
            ],
            "node": "kube-node",
            "status": "Ready"
        }
    ]
}
```
5. Remove the LVMCluster CR and wait for the deletion of all resources in the
   namespace except the operator (the controller-manager pod)
``` console
kubectl delete lvmclusters.lvm.topolvm.io lvmcluster-sample
lvmcluster.lvm.topolvm.io "lvmcluster-sample" deleted

kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
controller-manager-8bf864c85-8zjlp   3/3     Running   0          125m
```
6. Log in to the node and remove the LVM volume groups and physical volumes
``` console
sh-4.4# vgremove vg1 --nolock
  WARNING: File locking is disabled.
  Volume group "vg1" successfully removed
sh-4.4# pvremove /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 --nolock
  WARNING: File locking is disabled.
  Labels on physical volume "/dev/nvme0n1" successfully wiped.
  Labels on physical volume "/dev/nvme1n1" successfully wiped.
  Labels on physical volume "/dev/nvme2n1" successfully wiped.
```
7. Remove the lvmd.yaml config file from the node
``` console
sh-4.4# rm /etc/topolvm/lvmd.yaml
```
Note:
- Removing the volume groups, physical volumes and the lvmd config file is
  necessary during cleanup; otherwise LVMO fails to deploy Topolvm in the next
  iteration.

## Uninstalling LVMO

- LVMO can be removed from the cluster via one of the following methods,
  depending on how it was installed
``` console
# if deployed via manifests
make undeploy

# if deployed via olm
make undeploy-with-olm
```

[repo-readme]: ../../README.md
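
After `make undeploy` (or `make undeploy-with-olm`) completes, a quick sanity
check can confirm that nothing is left behind. This is only a sketch: the
namespace below assumes the `make deploy` installation used earlier in this
guide, and whether the CRDs are removed depends on how the operator was
installed, so an empty result from the second command is only expected when the
CRDs were deployed and removed together with the operator.

``` console
# the controller-manager pod should no longer be present
kubectl get pods -n lvm-operator-system

# look for any leftover LVMO or topolvm CRDs
kubectl get crd | grep -iE 'lvm.topolvm.io|topolvm.cybozu.com'
```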