diff --git a/content/posts/2018-08-23-ceph-erasure-openstack/index.md b/content/posts/2018-08-23-ceph-erasure-openstack/index.md
index ac083c6..1531890 100644
--- a/content/posts/2018-08-23-ceph-erasure-openstack/index.md
+++ b/content/posts/2018-08-23-ceph-erasure-openstack/index.md
@@ -20,7 +20,7 @@ If you're impatient, skip to the solution section 😃
Over the last few months I've been working with the University of Cape Town on the [Ilifu research cloud project](http://www.researchsupport.uct.ac.za/ilifu). The focus for the initial release of the cloud is mainly to provide compute and storage to astronomy and bioinformatics use cases.
-The technology powering this cloud is the ever-growing-in-popularity combination of OpenStack (Queens release) as the virtualisation platform and Ceph (Luminous) as the storage backend. We're utilising the [Kolla](https://github.com/openstack/kolla) and [Kolla-ansible](https://github.com/openstack/kolla-ansible) projects to deploy the OpenStack side of things. I am the lead on the Ceph deployment and opted for the [Ceph-ansible](https://github.com/ceph/ceph-ansible) method of deployment.
+The technology powering this cloud is the ever-growing-in-popularity combination of OpenStack (Queens release) as the virtualization platform and Ceph (Luminous) as the storage backend. We're utilising the [Kolla](https://github.com/openstack/kolla) and [Kolla-ansible](https://github.com/openstack/kolla-ansible) projects to deploy the OpenStack side of things. I am the lead on the Ceph deployment and opted for the [Ceph-ansible](https://github.com/ceph/ceph-ansible) method of deployment.
We ran into some issues getting the OpenStack services to work on the Ceph cluster when using erasure coded pools...
diff --git a/content/posts/2024-03-04-ceph-importing-cinder-volumes-without-glance/index.md b/content/posts/2024-03-04-ceph-importing-cinder-volumes-without-glance/index.md
index 2890963..4b919a2 100644
--- a/content/posts/2024-03-04-ceph-importing-cinder-volumes-without-glance/index.md
+++ b/content/posts/2024-03-04-ceph-importing-cinder-volumes-without-glance/index.md
@@ -5,9 +5,9 @@ cover:
alt: "Ubuntu Login Prompt That Says Login Failed."
#caption: "I'm sorry Dave, I'm afraid I can't do that."
author: "Eugene de Beste"
-title: "Recovering Cloud Virtual Machine Access without GRUB (QEMU/OpenStack)"
+title: "Accelerate Cinder Volume Imports in OpenStack By Avoiding Glance"
date: "2024-03-04"
-description: It's possible to directly manipulate Ceph to speed up the importing of Cinder volumes into OpenStack, rather than using Glance to fist upload an image to and then convert to a volume.
+description: When importing qcow2 or raw images into Ceph-backed Cinder, bypassing Glance and working directly with RBD can significantly speed up the process. Getting your hands a little dirty with the underlying tooling is often worth it.
categories:
- Technology
tags:
@@ -24,7 +24,37 @@ tags:
- Troubleshooting
showtoc: false
-draft: true
---
-# Cc
\ No newline at end of file
+If you've ever had to move volumes from one OpenStack cloud to another, you may know the following process: **_convert the Cinder volume to a Glance image so it can be downloaded, then apply the reverse process on the destination cloud_**. This is obviously laborious, especially if there are a ton of volumes to port.
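+
+For reference, the conventional Glance round trip looks roughly like this. This is only a sketch using the standard OpenStack CLI; the image and volume names are placeholders, and the exact flags (notably `--volume` on `openstack image create`) vary between client versions:
+
+```bash
+# On the source cloud: turn the volume into an image, then download it
+openstack image create --volume <source-volume> <tmp-image>
+openstack image save --file <tmp-image>.img <tmp-image>
+
+# On the destination cloud: upload the image, then turn it back into a volume
+openstack image create --disk-format raw --container-format bare --file <tmp-image>.img <tmp-image>
+openstack volume create --image <tmp-image> --size <size-in-GB> <destination-volume>
+```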
+
+I recently had to do a bulk import of more than 100 volumes in `qcow2` format for a client migrating to our cloud. The prospect of applying the above process to each of them made my skin crawl. Instead, I followed this approach:
+
+1. Download a `qcow2` image from the website provided by the client onto one of my Ceph nodes.
+2. Convert the `qcow2` image to `raw`, which is better suited to Ceph. RBD already provides features such as snapshots and thin provisioning that `qcow2` would otherwise supply.
+ ```bash
+    qemu-img convert -p -f qcow2 -O raw <image>.qcow2 <image>.img
+ ```
+3. Get the size of the resulting raw image (round up to the nearest GiB when sizing the volume).
+    ```bash
+    ls -laph <image>.img
+ ```
+4. Create a volume of the appropriate name and size in OpenStack, then get its ID.
+    ```bash
+    openstack volume create --size=<size-in-GB> <volume-name>
+    ID=$(openstack volume show -c id -f value <volume-name>)
+ ```
+5. Identify the RBD volume location.
+    ```bash
+    rbd -p <pool> ls | grep $ID
+    ```
+    You should see something like `volume-<ID>`. Now the volume's location in Ceph is known.
+6. Delete the empty RBD volume in Ceph and replace it with the converted raw image.
+    ```bash
+    rbd -p <pool> rm volume-$ID
+    rbd -p <pool> import <image>.img volume-$ID
+ ```
+
+Once done, verify the import by spinning up a VM and attaching the volume to it; the imported data should be visible from inside the VM. This also works for bootable volumes.
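+
+For example, attaching with the OpenStack CLI (server and volume names are placeholders):
+
+```bash
+openstack server add volume <server> <volume-name>
+```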
+
+Wrapping this process up in some scripts can save **_hours_** of manual work!
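+
+A minimal sketch of what such a script could look like, assuming the downloaded `qcow2` files live in an `images/` directory, that the Ceph pool backing Cinder is substituted for `<pool>`, and that it runs on a node with both the `openstack` and `rbd` CLIs available:
+
+```bash
+#!/usr/bin/env bash
+# Sketch: bulk-import qcow2 images as Cinder volumes, bypassing Glance.
+set -euo pipefail
+
+POOL="<pool>"  # placeholder: the Ceph pool backing Cinder volumes
+
+for qcow in images/*.qcow2; do
+    name=$(basename "$qcow" .qcow2)
+
+    # Convert to raw, since RBD stores raw data
+    qemu-img convert -p -f qcow2 -O raw "$qcow" "$name.img"
+
+    # Size in GiB, rounded up, for the Cinder volume
+    size_gb=$(( ($(stat -c %s "$name.img") + 1073741823) / 1073741824 ))
+
+    # Create the placeholder volume and grab its ID
+    # (in practice you may want to wait for it to reach 'available' first)
+    openstack volume create --size "$size_gb" "$name" > /dev/null
+    id=$(openstack volume show -c id -f value "$name")
+
+    # Swap the empty RBD image for the converted raw image
+    rbd -p "$POOL" rm "volume-$id"
+    rbd -p "$POOL" import "$name.img" "volume-$id"
+done
+```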
\ No newline at end of file
diff --git a/content/posts/_draft-linux-windows-gaming-vm/index.md b/content/posts/_draft-linux-windows-gaming-vm/index.md
index 7b5e825..dd2f27c 100644
--- a/content/posts/_draft-linux-windows-gaming-vm/index.md
+++ b/content/posts/_draft-linux-windows-gaming-vm/index.md
@@ -55,7 +55,7 @@ This section won't be an exhaustive list of concepts and their descriptions, but
Virtualization came out of a desire to maximise the utilisation of physical hardware and for portability of software. Before containers were so common, people used to create virtual machines for their applications in order to ship dependencies or scale their services. It’s hard to believe, because it seems like such a clunky approach compared to what we have now, but containers were not well developed or as ubiquitous. The general idea is that you have some agent or driver, referred to as a “hypervisor” which is responsible for mimicking what actual hardware would look like to an operating system. This hypervisor will reserve hardware (such as memory) for use by the guest.
-The guest operating system’s full stack is installed to a physical or virtualised drive. This means that the kernel, operating system, libraries and everything else are installed. The operating system inside the virtual machine may never know that it is not running on physical hardware.
+The guest operating system’s full stack is installed to a physical or virtualized drive. This means that the kernel, operating system, libraries and everything else are installed. The operating system inside the virtual machine may never know that it is not running on physical hardware.