
Change in diskspace does not reflect in zfs reporting #16222

Closed
zenny opened this issue May 24, 2024 · 8 comments
Labels
Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments

zenny commented May 24, 2024

System information

Type                  | Version/Name
Distribution Name     | VoidLinux
Distribution Version  | Rolling
Kernel Version        | 6.1.83_1
Architecture          | amd64
OpenZFS Version       | 2.2.4_1

Describe the problem you're observing

The pool reports that there is no space left on the device, so I deleted a large number of big files (more than 200 GB in total), mainly from the /xtbmr/HOMEPOOL/HOME and /xtbmr/DATAPOOL/CONTAINERS datasets (see below).

Even after deleting that huge batch of files, the freed space does not show up in either df -h or zfs list.

Describe how to reproduce the problem

Deleted a lot of files, more than 200 GB, from one of the ZFS datasets (mostly deprecated distro ISO images), but the deleted space still does not appear in either df -h or zfs list.

Nevertheless, zpool list shows 128G free.

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
xtbmr  9.06T  8.94T   128G        -         -    81%    98%  1.00x    ONLINE  -

zpool status shows no errors.

# zpool status
  pool: xtbmr
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 1 days 03:19:34 with 0 errors on Thu May  2 10:11:44 2024
config:

        NAME                        STATE     READ WRITE CKSUM
        xtbmr                       ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000c500a3ecb69f  ONLINE       0     0     0
            wwn-0x5000c500a3eaef9e  ONLINE       0     0     0

errors: No known data errors

Yet df -h does not report the space released by the files and directories deleted from the HOMEPOOL and DATAPOOL datasets.

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    16G     0   16G   0% /dev
tmpfs                       16G     0   16G   0% /dev/shm
tmpfs                       16G  1.6M   16G   1% /run
/dev/sda2                   79G   62G   14G  83% /
cgroup                      16G     0   16G   0% /sys/fs/cgroup
xtbmr                      128K  128K     0 100% /xtbmr
xtbmr/BITCOIN               80M   80M     0 100% /xtbmr/BITCOIN
xtbmr/DATAPOOL             128K  128K     0 100% /xtbmr/DATAPOOL
xtbmr/DATAPOOL/BACKUPS     128K  128K     0 100% /xtbmr/DATAPOOL/BACKUPS
xtbmr/DATAPOOL/CONTAINERS  317G  317G     0 100% /xtbmr/DATAPOOL/CONTAINERS
xtbmr/DATAPOOL/DOCKER      833M  833M     0 100% /xtbmr/DATAPOOL/DOCKER
xtbmr/DATAPOOL/DOWNLOADS   215G  215G     0 100% /xtbmr/DATAPOOL/DOWNLOADS
xtbmr/DATAPOOL/IMAGES      3.8G  3.8G     0 100% /xtbmr/DATAPOOL/IMAGES
xtbmr/DATAPOOL/ISO         5.7G  5.7G     0 100% /xtbmr/DATAPOOL/ISO
xtbmr/DATAPOOL/PRODOCERO   588G  588G     0 100% /xtbmr/DATAPOOL/PRODOCERO
xtbmr/DATAPOOL/STORAGE     128K  128K     0 100% /xtbmr/DATAPOOL/STORAGE
xtbmr/DOCRAW               1.2T  1.2T     0 100% /xtbmr/DOCRAW
xtbmr/HOMEPOOL             128K  128K     0 100% /xtbmr/HOMEPOOL
xtbmr/HOMEPOOL/BACKUP       95G   95G     0 100% /xtbmr/HOMEPOOL/BACKUP
xtbmr/HOMEPOOL/BOOT         95M   95M     0 100% /xtbmr/HOMEPOOL/BOOT
xtbmr/HOMEPOOL/HOME        4.4T  4.4T     0 100% /xtbmr/HOMEPOOL/HOME
xtbmr/HOMEPOOL/ROOT         30G   30G     0 100% /xtbmr/HOMEPOOL/ROOT
xtbmr/STORAGE              128K  128K     0 100% /xtbmr/STORAGE
xtbmr/STORAGE/PRODOCERO    1.8T  1.8T     0 100% /xtbmr/STORAGE/PRODOCERO
xtbmr/STORAGE/TOMB         128K  128K     0 100% /xtbmr/STORAGE/TOMB
xtbmr/podman               128K  128K     0 100% /xtbmr/podman
xtbmr/podman/store         128K  128K     0 100% /xtbmr/podman/store
tmpfs                       16G   30M   16G   1% /tmp
none                        16G   32K   16G   1% /run/systemd
none                        16G     0   16G   0% /run/user
tmpfs                      100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs                      100K     0  100K   0% /var/lib/lxd/devlxd
tmpfs                      3.2G  4.0K  3.2G   1% /run/user/993
tmpfs                      3.2G   24K  3.2G   1% /run/user/1002

Include any warning/errors/backtraces from the system logs

I have the zpool history -i outputs, but they are far too large to post here:

# du -h zpool_history_-i*
405M    zpool_history_-i
88M     zpool_history_-i_DATAPOOL-grepped
48M     zpool_history_-i_HOMEPOOL-grepped

Thanks in advance.

Cheers,
/z

zenny added the Type: Defect (Incorrect behavior, e.g. crash, hang) label on May 24, 2024
zenny (Author) commented May 24, 2024

@scineram Thanks for reacting to the post. Would you mind explaining the reason behind your downvote (thumbs-down emoji) on the issue I raised above? A reaction without any reasoning is not fair, technically speaking; subjective reactions are nothing more than cancel culture, imho. Prove me wrong! Thanks.

amotin (Member) commented May 24, 2024

@zenny Don't you have snapshots on the datasets you are deleting from that could be holding the data?
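
For reference, snapshots under the affected datasets can be listed with something like the following sketch (the dataset names are simply taken from the report above):

# zfs list -t snapshot -r xtbmr/HOMEPOOL/HOME
# zfs list -t snapshot -r xtbmr/DATAPOOL/CONTAINERS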

zenny (Author) commented May 24, 2024

@amotin Thanks for your attention. I have zfs-auto-snapshot running on all datasets. However, I have not rolled back to any snapshot; I just deleted unnecessary ISO installer files and other VM images manually. Yet the deleted space is not reflected anywhere! An old thread I saw, #1548, resembles mine, though it may or may not be outdated. Cheers and have a nice weekend.

amotin (Member) commented May 24, 2024

@zenny Space on the pool will not be freed while at least one snapshot references the deleted data.

zenny (Author) commented May 24, 2024

@amotin Thanks. I have not deleted any snapshots, just files. Is there any way to handle the situation so that the deletions get reflected in the free space? Thanks.

amotin (Member) commented May 24, 2024

@zenny I thought my direction was quite clear. If you have snapshots referencing the deleted data, you have to delete (some of) those snapshots to free the space. RTFM.
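
For example, a snapshot that still references the deleted data can be removed with zfs destroy. The commands below are only a sketch; the snapshot name is a hypothetical zfs-auto-snapshot-style example, and zfs destroy -nv can be used first as a dry run to see how much space would be reclaimed:

# zfs list -t snapshot -r xtbmr/DATAPOOL/ISO
# zfs destroy -nv xtbmr/DATAPOOL/ISO@zfs-auto-snap_monthly-2024-01-01-0000
# zfs destroy xtbmr/DATAPOOL/ISO@zfs-auto-snap_monthly-2024-01-01-0000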

rincebrain (Contributor) commented

@zenny snapshots are a way of saying "keep the data from when I took the snapshot until I delete this snapshot, even if I modify or delete it later".

So if you, say, wrote a 50 GB file, took a snapshot, then deleted the 50 GB file, it would no longer appear in the filesystem, but since it's still referenced by the snapshot, it would still take up space.

The USED value in ZFS is the sum of the 4 usedby properties - usedbydataset, usedbyrefreservation, usedbychildren, and usedbysnapshots. So if you look, and see most of your space usage is in usedbysnapshots, then you know that it's data that's in the snapshots, and not the "current" version of the filesystems, that's taking up the space.
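
For example, the per-dataset breakdown of those usedby properties can be inspected with either of the following (the pool and dataset names are just taken from this report):

# zfs list -o space -r xtbmr
# zfs get usedbydataset,usedbysnapshots,usedbyrefreservation,usedbychildren xtbmr/HOMEPOOL/HOME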

zenny (Author) commented May 25, 2024

@amotin and @rincebrain Thanks for your useful inputs. https://github.com/bahamas10/zfs-prune-snapshots solved the issue; I was redirected there from https://serverfault.com/questions/340837/how-to-delete-all-but-last-n-zfs-snapshots. Thanks and have a nice weekend. Cheers, /z
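
For anyone hitting the same situation, a minimal hand-rolled sketch of the "keep only the newest N snapshots" idea from the serverfault thread could look roughly like this. DATASET and N are placeholders (the dataset shown is just an example from this report); review the snapshot list before destroying anything, and note that -r also recurses into child datasets:

DATASET=xtbmr/DATAPOOL/ISO    # placeholder dataset
N=5                           # keep the newest 5 snapshots

# List snapshots newest-first, skip the first N, and destroy the rest.
zfs list -H -t snapshot -o name -S creation -r "$DATASET" \
  | tail -n +"$((N + 1))" \
  | xargs -r -n1 zfs destroy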

zenny closed this as completed on May 25, 2024