
Deleting Files Doesn't Free Space #4567

Closed
xaragon opened this issue Apr 27, 2016 · 1 comment
Labels
Status: Inactive (not being actively updated), Type: Defect (incorrect behavior, e.g. crash, hang)

Comments


xaragon commented Apr 27, 2016

Hello!

I have an issue I can't seem to find a fix for. I have read through both issue #1188 and issue #1548, but the solutions don't seem to work for me.

I have tried mounting and unmounting the pool, and even exporting and importing it, but after deleting files the pool still reports the same amount of space used. I have changed xattr to sa, and that has helped free the space of newly deleted files.

There are no snapshots present, so that can't be the problem.

Has anyone seen this behavior before, and is there a fix? I have struggled with this for about a week now and my hair is starting to turn grey.

I have also included a couple of code blocks that might shed some light on the issue.

This is on zfs-initramfs 0.6.5-pve6~jessie and Proxmox v4.2-1

root@prox:# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   10M     0    10M    0%  /dev
tmpfs                 6.3G   17M   6.3G    1%  /run
/dev/dm-0              30G  1.5G    27G    6%  /
tmpfs                  16G   40M    16G    1%  /dev/shm
tmpfs                 5.0M     0   5.0M    0%  /run/lock
tmpfs                  16G     0    16G    0%  /sys/fs/cgroup
/dev/sdc1             917G   64G   807G    8%  /mnt/evo1tb
/dev/sda1             2.7T  1.3T   1.4T   49%  /mnt/red3tb
/dev/mapper/pve-data   59G   52M    59G    1%  /var/lib/vz
priv_vol1             3.6T  3.2T   399G   89%  /mnt/priv_vol1
tmpfs                 100K     0   100K    0%  /run/lxcfs/controllers
cgmfs                 100K     0   100K    0%  /run/cgmanager/fs
/dev/fuse              30M   16K    30M    1%  /etc/pve
vol1                  1.9T  916G   965G   49%  /mnt/vol1
vol1/share1           9.0T  8.1T   965G   90%  /mnt/vol1/share1
vol1/shared           1.3T  339G   965G   26%  /mnt/vol1/shared
root@prox:#

root@prox:# zpool list -o name,size,allocated,free,freeing
NAME       SIZE   ALLOC  FREE   FREEING
priv_vol1  3.62T  3.12T  515G   0
vol1       14.5T  12.8T  1.75T  0
root@prox:#

root@prox:# zfs list -t snapshot
no datasets available
root@prox:#

root@prox:# uname -a
Linux prox 4.2.6-1-pve #1 SMP Wed Dec 9 10:49:55 CET 2015 x86_64 GNU/Linux
root@prox:#

root@prox:# cat /etc/debian_version
8.4
root@prox:#
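A few other checks that might show where the space is hiding, since snapshots and pending frees look clean (a sketch only; `/mnt/vol1` and `vol1/share1` are just example names from my layout, and `lsof` may need to be installed separately):

```shell
# Deleted files that some process still holds open keep their blocks
# allocated until that process closes them or exits.
lsof +L1 /mnt/vol1

# Per-dataset breakdown of where ZFS charges the space
# (the dataset itself, snapshots, children, refreservation).
zfs list -o space vol1/share1

# Pending asynchronous frees show up as a non-zero "freeing" value.
zpool get freeing vol1
```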


kjp949 commented Apr 28, 2016

@xaragon

I had this very same issue on Ubuntu 16.04. I have two 3 TB drives in a mirror configuration. What I had to do was split apart the mirror, create a new pool on one of the drives with xattr=sa, enable xattr=sa on the other pool, use zfs send/recv to replicate to the "new" pool, then wipe out the "old" pool and attach that drive to the "new" pool.
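Sketched out, the sequence was roughly the following. This is a reconstruction, not a tested script: the pool names `tank`/`newtank` and the `/dev/sd*` device names are placeholders, and detaching leaves the data on a single disk until the final resilver completes, so have backups first.

```shell
# Detach one disk from the existing mirror ("tank" and the device
# names below are placeholders for this sketch).
zpool detach tank /dev/sdb

# Create a fresh pool on the freed disk with xattr=sa from the start.
zpool create -O xattr=sa newtank /dev/sdb

# Enable xattr=sa on the old pool too, so the replicated datasets
# carry the property across.
zfs set xattr=sa tank

# Replicate everything via a recursive snapshot and send/recv.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank

# Once the copy is verified, destroy the old pool and re-attach its
# disk to rebuild the mirror on the new pool.
zpool destroy tank
zpool attach newtank /dev/sdb /dev/sda
```

The send/recv step matters because xattr=sa only applies to newly written files; rewriting the data through replication is what actually reclaims the space tied up in old dir-style xattrs.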
