
ZFS Initialize works only once #14382

Open
UFTL opened this issue Jan 12, 2023 · 2 comments
Labels
Type: Feature Feature request or new feature

Comments

@UFTL

UFTL commented Jan 12, 2023

OS: Zorin OS 16.2-r1 (Ubuntu derivative)
Kernel: 5.15.0-57-generic #63~20.04.1-Ubuntu SMP Wed Nov 30 13:40:16 UTC 2022
Machine: HP 17-cp1035cl, 12 GB RAM, 1 TB spinning rust internal drive, ZFS (rpool, bpool) in mirrored rpool setup with 2 (mirrored) SLOG drives

Ok, so I've got this little code blurb set up as a keyboard shortcut (Super+F):
gnome-terminal -- /bin/sh -c '
set zfs:zfs_initialize_value=0
sudo zpool initialize bpool d7335f16-9bd1-1c4d-88b9-e952441dd227
sudo zpool initialize rpool 965d0a40-cce9-664d-8f4a-04c8075238c4 \
    b34bba5d-f7ed-4d3e-95b5-47fd750e05f6 \
    1a7428f8-4950-c248-b947-d8b817a0cd5a \
    b5fd0c2c-0f02-9942-8576-d7b0b851fef1
while sudo zpool status | grep "initializing" > /dev/null; do
    clear; sudo zpool status -Td; sleep 2
done
clear; sudo zpool status -Td; sleep 15'

It sets the value used by zfs_initialize from the default '0xdeadbeefdeadbeef' to '0', then initializes all of the rpool drives. The idea is to zero unused (free) space on the drives so that backing up the drive to an .img file and then compressing that file produces a very small .img.7z: the 1 TB main drive backup compressed to a 2.8 GB file, whereas without zeroing sectors it compressed to an 8.4 GB file.
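One caveat worth flagging (my assumption, verify on your system): inside /bin/sh, the POSIX `set` builtin only assigns positional parameters, so `set zfs:zfs_initialize_value=0` likely never reaches ZFS. On Linux, OpenZFS exposes zfs_initialize_value as a kernel module parameter, so changing the written pattern would look more like:

```shell
# Hedged sketch: on Linux OpenZFS, zfs_initialize_value is a kernel
# module parameter, not a shell variable.  The POSIX `set` builtin in
# /bin/sh only assigns positional parameters, so the pattern is
# normally changed by writing to sysfs instead (needs root and a
# loaded zfs module):
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value

# Verify the current value before starting the initialize:
cat /sys/module/zfs/parameters/zfs_initialize_value
```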

It can also be used when running VMs, to shrink VM disk images by zeroing previously-used sectors, although I don't use it for that.

It can also be used for security purposes, to ensure sensitive data residing on previously-used sectors is zeroed, although I don't use it for that.
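The compression win from zeroing is easy to demonstrate in miniature. This toy sketch (arbitrary temp-file names, and gzip standing in for 7z) shows zero-filled data compressing to a tiny fraction of the size of incompressible data:

```shell
# Toy demo (hypothetical file names; gzip stands in for 7z here):
# zeroed space compresses to almost nothing, leftover data does not,
# which is why zeroing free space shrinks the compressed drive image.
dd if=/dev/zero    of=/tmp/zeros.img bs=1M count=16 2>/dev/null
dd if=/dev/urandom of=/tmp/rand.img  bs=1M count=16 2>/dev/null
gzip -kf /tmp/zeros.img /tmp/rand.img
ls -l /tmp/zeros.img.gz /tmp/rand.img.gz
```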

I had to include the PARTUUID of each drive because otherwise zpool initialize throws the error: "Cannot initialize rpool. 'hole' device not available." That appears to be a bug, and not one limited to zpool initialize.

The problem:
sudo zpool initialize only works the first time it is run. On subsequent runs it finishes almost immediately and doesn't zero free space. I believe this is because zpool initialize keeps a record of its progress and never discards that record on completion, so later runs conclude the work is already done.

I can detach and then reattach each drive in rpool (letting the pool resilver, then running a scrub for each drive detached and reattached), and zpool initialize will again work as desired, which bolsters the notion that zpool initialize keeps a record of its progress and does not discard it after its first run.
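For anyone wanting to reproduce the workaround, the cycle looks roughly like this (device names are placeholders, and each resilver and scrub must finish before the next step):

```shell
# Sketch of the detach/reattach workaround (placeholder device names;
# wait for each resilver/scrub to complete -- watch `zpool status`):
sudo zpool detach rpool /dev/disk/by-partuuid/MIRROR-DISK-A
sudo zpool attach rpool /dev/disk/by-partuuid/MIRROR-DISK-B \
                        /dev/disk/by-partuuid/MIRROR-DISK-A
sudo zpool scrub rpool            # after the resilver completes
# ...repeat for the other mirror member, then initialize works again:
sudo zpool initialize rpool
```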

We have here an excellent opportunity to solve a long-standing issue: zeroing previously-used drive sectors. All we have to do is figure out how to discard the progress data that zpool initialize keeps.

So I ask the OpenZFS developers: where is that data stored, and how do we delete it? Could a flag be added to zpool initialize that clears that data when it runs?

Ideally, of course, it would track which sectors hold data and which it has already zeroed, so that when run from a cron job it only zeroes newly-freed sectors.

@rincebrain
Contributor

#12451.

@UFTL
Author

UFTL commented Jan 12, 2023

I've created a script that lets zpool initialize work repeatedly rather than just once. It detaches one drive from the mirror, reattaches it, and performs the resilver and scrub; does the same for the other drive; runs the initialize on rpool; then turns off the swap partition, zeroes its sectors, and re-creates the swap partition with the same UUID (so I don't have to edit fstab each time I run the script).
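In outline, those steps look like this (placeholder device names and UUID; the real script also waits for the resilver and scrub to complete between steps):

```shell
#!/bin/sh
# Outline only -- placeholder names throughout; the full script also
# waits for resilver/scrub completion between steps.
SWAP=/dev/disk/by-partuuid/SWAP-PART            # placeholder
SWAP_UUID=00000000-0000-0000-0000-000000000000  # placeholder

# Detach/reattach each mirror member, resilvering and scrubbing:
sudo zpool detach rpool /dev/disk/by-partuuid/DISK-A
sudo zpool attach rpool /dev/disk/by-partuuid/DISK-B \
                        /dev/disk/by-partuuid/DISK-A
sudo zpool scrub rpool
# ...same again for DISK-B, then:
sudo zpool initialize rpool

# Zero the swap partition and re-create it with the same UUID,
# so /etc/fstab keeps working without edits:
sudo swapoff "$SWAP"
sudo dd if=/dev/zero of="$SWAP" bs=1M status=progress
sudo mkswap -U "$SWAP_UUID" "$SWAP"
sudo swapon "$SWAP"
```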

https://forum.zorin.com/t/zfs-zero-drives-to-get-better-backup-img-file-compression-while-running-linux/24086/3

Then I boot into the Zorin OS Boot USB stick and use a script I created (dd chained to 7z) to back up the internal drive to a .img.7z file.
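That backup step, dd piped into 7z, can be sketched as follows (hypothetical device and file names; run from the live USB so the internal drive is unmounted):

```shell
# Sketch (hypothetical names; run from live media, drive unmounted):
# 7z reads the raw image from stdin via -si and compresses at -mx=9.
sudo dd if=/dev/sda bs=1M status=progress \
  | 7z a -si -mx=9 /media/usb/backup.img.7z
```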

It works. I just ran the entire procedure, and the .img.7z file came to 2.9 GB for a 1 TB drive image. That was after I'd uninstalled GIMP, installed a simpler graphics program, applied system updates, updated grub via sudo apt dist-upgrade, and added the Zorin OS Boot USB stick to the grub menu, so with ZFS's copy-on-write there were definitely sectors holding data that was now unused.

@behlendorf behlendorf added the Type: Feature Feature request or new feature label Jan 12, 2023