
Data Leak on ZFS #4485

Closed
write0nly opened this issue Apr 4, 2016 · 4 comments

@write0nly

Hi All,

I'm having a really strange situation where ZFS seems to be writing twice as much data as it should, or occupying twice the space it should.

The system is running CentOS with kernel 3.10.0-327.4.5.el7.x86_64 and the following packages:

libzfs2-0.6.5.3-1.el7.centos.x86_64
zfs-dkms-0.6.5.3-1.el7.centos.noarch
zfs-0.6.5.3-1.el7.centos.x86_64
spl-0.6.5.3-1.el7.centos.x86_64
spl-dkms-0.6.5.3-1.el7.centos.noarch

On the ZFS dataset, copies is 1 and compression is lz4.
The zpool is built (I know, not recommended) on a partition, /dev/sda3; I can't give it a full disk unfortunately. /dev/sda3 was created starting on a 4k boundary, and ashift has been set to 12 to match.
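
For what it's worth, the ashift actually in use can be double-checked with something like the following (zdb reads it from the cached pool config; exact output varies by version):

# zdb -C SSD | grep ashift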

If I run zfs list, it reports 1.24T used. There are no snapshots whatsoever; I just created the pool and ran mongo on it to replicate some data:

NAME              AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  RATIO
SSD/data           528G  1.24T         0   1.24T              0          0  1.01x

However, du -sh on the filesystem /local/data says it only holds 615GB, i.e. half of the data seen above.
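
For reference, the comparison boils down to something like this (the --apparent-size run is only an extra check to rule out sparse files skewing du):

# zfs list SSD/data
# du -sh /local/data
# du -sh --apparent-size /local/data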

The zpool shows 10% fragmentation:

NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
SSD   1.81T  1.24T   586G         -    10%    68%  1.00x  ONLINE  -

zfs get all shows that written is 1.24T and referenced is also 1.24T, but du only sees half of the data, and the filesystem fills up completely since my data set is 1.2T in size.
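
To narrow that down, these are roughly the properties worth pulling instead of the full zfs get all (logicalused/logicalreferenced may not be available on older versions):

# zfs get used,referenced,written,logicalused,logicalreferenced,compressratio SSD/data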

Can anyone shed some light on this? Could it be caused by the fact that I am using /dev/sda3 instead of the full device, some strange alignment issue, or block-size overhead?

Cheers,
Eduardo.

@kernelOfTruth
Contributor

What does it show if you export the pool and re-import it?
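
i.e. something along the lines of (assuming the pool can be taken offline briefly):

# zpool export SSD
# zpool import SSD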

@write0nly
Author

@kernelOfTruth your suggestion was spot on: the space reappears after an export->import.

Before the export:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
SSD         1.75T  60.3G      0      0      0      0
  sda3      1.75T  60.3G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

Just after the import:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
SSD          903G   953G      0      0      0      0
  sda3       903G   953G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

The space does come back:

# zfs list -o space
NAME              AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
SSD/data          895G   903G         0    903G              0          0

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
SSD   1.81T   903G   953G         -     5%    48%  1.00x  ONLINE  -

However, this doesn't help me with the fact that the filesystem fills up completely during my mongo resync...

Is this a known bug/problem?

Thanks!

@write0nly
Author

@kernelOfTruth could this be #1548?

@behlendorf
Copy link
Contributor

@write0nly yes, it looks that way. Closing as a duplicate of #1548.
